00:00:00.001 Started by upstream project "autotest-nightly" build number 3912 00:00:00.001 originally caused by: 00:00:00.002 Started by user Latecki, Karol 00:00:00.003 Started by upstream project "autotest-nightly" build number 3911 00:00:00.003 originally caused by: 00:00:00.003 Started by user Latecki, Karol 00:00:00.004 Started by upstream project "autotest-nightly" build number 3909 00:00:00.004 originally caused by: 00:00:00.005 Started by user Latecki, Karol 00:00:00.006 Started by upstream project "autotest-nightly" build number 3908 00:00:00.006 originally caused by: 00:00:00.006 Started by user Latecki, Karol 00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.074 The recommended git tool is: git 00:00:00.074 using credential 00000000-0000-0000-0000-000000000002 00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.112 Fetching changes from the remote Git repository 00:00:00.114 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.156 Using shallow fetch with depth 1 00:00:00.157 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.157 > git --version # timeout=10 00:00:00.209 > git --version # 'git version 2.39.2' 00:00:00.209 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.266 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.266 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5 00:00:05.349 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.360 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.372 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017 (FETCH_HEAD) 00:00:05.372 > git config core.sparsecheckout # timeout=10 00:00:05.382 > git read-tree -mu HEAD # timeout=10 00:00:05.397 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5 00:00:05.417 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs." 00:00:05.417 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10 00:00:05.540 [Pipeline] Start of Pipeline 00:00:05.554 [Pipeline] library 00:00:05.556 Loading library shm_lib@master 00:00:05.556 Library shm_lib@master is cached. Copying from home. 00:00:05.573 [Pipeline] node 00:00:05.585 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:05.587 [Pipeline] { 00:00:05.596 [Pipeline] catchError 00:00:05.597 [Pipeline] { 00:00:05.605 [Pipeline] wrap 00:00:05.612 [Pipeline] { 00:00:05.617 [Pipeline] stage 00:00:05.618 [Pipeline] { (Prologue) 00:00:05.632 [Pipeline] echo 00:00:05.633 Node: VM-host-SM0 00:00:05.636 [Pipeline] cleanWs 00:00:05.643 [WS-CLEANUP] Deleting project workspace... 00:00:05.643 [WS-CLEANUP] Deferred wipeout is used... 
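Editor's note: the checkout above shallow-fetches a single Gerrit change ref and detaches onto the fetched commit. A minimal standalone sketch of the same git sequence, assuming a fresh directory and omitting the Jenkins credential/timeout handling (the plugin's actual invocation is the one recorded in the log):

  # Shallow-fetch one Gerrit change ref and check out the fetched commit (detached HEAD).
  git init jbp && cd jbp
  git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  git fetch --tags --force --progress --depth=1 origin refs/changes/29/24129/6
  git checkout -f FETCH_HEAD   # lands on e33ef006ccd688d2b66122cd0240b989d53c9017 per the log above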
00:00:05.649 [WS-CLEANUP] done 00:00:05.855 [Pipeline] setCustomBuildProperty 00:00:05.931 [Pipeline] httpRequest 00:00:05.947 [Pipeline] echo 00:00:05.948 Sorcerer 10.211.164.101 is alive 00:00:05.955 [Pipeline] httpRequest 00:00:05.960 HttpMethod: GET 00:00:05.960 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:05.961 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:05.962 Response Code: HTTP/1.1 200 OK 00:00:05.963 Success: Status code 200 is in the accepted range: 200,404 00:00:05.963 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:07.182 [Pipeline] sh 00:00:07.473 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:07.483 [Pipeline] httpRequest 00:00:07.511 [Pipeline] echo 00:00:07.512 Sorcerer 10.211.164.101 is alive 00:00:07.519 [Pipeline] httpRequest 00:00:07.523 HttpMethod: GET 00:00:07.523 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:07.523 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:00:07.539 Response Code: HTTP/1.1 200 OK 00:00:07.539 Success: Status code 200 is in the accepted range: 200,404 00:00:07.540 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:46.015 [Pipeline] sh 00:01:46.299 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz 00:01:49.594 [Pipeline] sh 00:01:49.871 + git -C spdk log --oneline -n5 00:01:49.871 f7b31b2b9 log: declare g_deprecation_epoch static 00:01:49.871 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static 00:01:49.871 3731556bd lvol: declare g_lvol_if static 00:01:49.871 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static 00:01:49.871 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static 00:01:49.890 [Pipeline] writeFile 00:01:49.909 [Pipeline] sh 00:01:50.188 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:50.199 [Pipeline] sh 00:01:50.479 + cat autorun-spdk.conf 00:01:50.479 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.479 SPDK_TEST_NVMF=1 00:01:50.479 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:50.479 SPDK_TEST_URING=1 00:01:50.479 SPDK_TEST_VFIOUSER=1 00:01:50.479 SPDK_TEST_USDT=1 00:01:50.479 SPDK_RUN_ASAN=1 00:01:50.479 SPDK_RUN_UBSAN=1 00:01:50.479 NET_TYPE=virt 00:01:50.479 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:50.485 RUN_NIGHTLY=1 00:01:50.486 [Pipeline] } 00:01:50.507 [Pipeline] // stage 00:01:50.525 [Pipeline] stage 00:01:50.528 [Pipeline] { (Run VM) 00:01:50.543 [Pipeline] sh 00:01:50.819 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:50.819 + echo 'Start stage prepare_nvme.sh' 00:01:50.819 Start stage prepare_nvme.sh 00:01:50.819 + [[ -n 7 ]] 00:01:50.819 + disk_prefix=ex7 00:01:50.819 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:50.819 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:50.819 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:50.819 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.819 ++ SPDK_TEST_NVMF=1 00:01:50.819 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:50.819 ++ SPDK_TEST_URING=1 00:01:50.819 ++ SPDK_TEST_VFIOUSER=1 00:01:50.819 ++ SPDK_TEST_USDT=1 00:01:50.819 ++ SPDK_RUN_ASAN=1 00:01:50.819 ++ 
SPDK_RUN_UBSAN=1 00:01:50.819 ++ NET_TYPE=virt 00:01:50.819 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:50.819 ++ RUN_NIGHTLY=1 00:01:50.819 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:50.819 + nvme_files=() 00:01:50.819 + declare -A nvme_files 00:01:50.819 + backend_dir=/var/lib/libvirt/images/backends 00:01:50.819 + nvme_files['nvme.img']=5G 00:01:50.819 + nvme_files['nvme-cmb.img']=5G 00:01:50.819 + nvme_files['nvme-multi0.img']=4G 00:01:50.819 + nvme_files['nvme-multi1.img']=4G 00:01:50.819 + nvme_files['nvme-multi2.img']=4G 00:01:50.819 + nvme_files['nvme-openstack.img']=8G 00:01:50.819 + nvme_files['nvme-zns.img']=5G 00:01:50.819 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:50.819 + (( SPDK_TEST_FTL == 1 )) 00:01:50.819 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:50.819 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:50.819 + for nvme in "${!nvme_files[@]}" 00:01:50.819 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:01:50.819 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:50.819 + for nvme in "${!nvme_files[@]}" 00:01:50.819 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:01:50.820 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:50.820 + for nvme in "${!nvme_files[@]}" 00:01:50.820 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:01:50.820 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:50.820 + for nvme in "${!nvme_files[@]}" 00:01:50.820 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:01:50.820 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:50.820 + for nvme in "${!nvme_files[@]}" 00:01:50.820 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:01:50.820 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:50.820 + for nvme in "${!nvme_files[@]}" 00:01:50.820 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:01:50.820 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:50.820 + for nvme in "${!nvme_files[@]}" 00:01:50.820 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:01:51.077 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:51.077 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:01:51.077 + echo 'End stage prepare_nvme.sh' 00:01:51.077 End stage prepare_nvme.sh 00:01:51.089 [Pipeline] sh 00:01:51.369 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:51.369 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b 
/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:01:51.369 00:01:51.369 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:51.369 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:51.369 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:51.369 HELP=0 00:01:51.369 DRY_RUN=0 00:01:51.369 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:01:51.369 NVME_DISKS_TYPE=nvme,nvme, 00:01:51.369 NVME_AUTO_CREATE=0 00:01:51.369 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:01:51.369 NVME_CMB=,, 00:01:51.369 NVME_PMR=,, 00:01:51.369 NVME_ZNS=,, 00:01:51.369 NVME_MS=,, 00:01:51.369 NVME_FDP=,, 00:01:51.369 SPDK_VAGRANT_DISTRO=fedora38 00:01:51.369 SPDK_VAGRANT_VMCPU=10 00:01:51.369 SPDK_VAGRANT_VMRAM=12288 00:01:51.370 SPDK_VAGRANT_PROVIDER=libvirt 00:01:51.370 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:51.370 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:51.370 SPDK_OPENSTACK_NETWORK=0 00:01:51.370 VAGRANT_PACKAGE_BOX=0 00:01:51.370 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:51.370 FORCE_DISTRO=true 00:01:51.370 VAGRANT_BOX_VERSION= 00:01:51.370 EXTRA_VAGRANTFILES= 00:01:51.370 NIC_MODEL=e1000 00:01:51.370 00:01:51.370 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:51.370 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:54.676 Bringing machine 'default' up with 'libvirt' provider... 00:01:55.241 ==> default: Creating image (snapshot of base box volume). 00:01:55.500 ==> default: Creating domain with the following settings... 
00:01:55.500 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721671746_2566fd4fbcb6343778fd 00:01:55.500 ==> default: -- Domain type: kvm 00:01:55.500 ==> default: -- Cpus: 10 00:01:55.500 ==> default: -- Feature: acpi 00:01:55.500 ==> default: -- Feature: apic 00:01:55.500 ==> default: -- Feature: pae 00:01:55.500 ==> default: -- Memory: 12288M 00:01:55.500 ==> default: -- Memory Backing: hugepages: 00:01:55.500 ==> default: -- Management MAC: 00:01:55.500 ==> default: -- Loader: 00:01:55.500 ==> default: -- Nvram: 00:01:55.500 ==> default: -- Base box: spdk/fedora38 00:01:55.500 ==> default: -- Storage pool: default 00:01:55.500 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721671746_2566fd4fbcb6343778fd.img (20G) 00:01:55.500 ==> default: -- Volume Cache: default 00:01:55.500 ==> default: -- Kernel: 00:01:55.500 ==> default: -- Initrd: 00:01:55.500 ==> default: -- Graphics Type: vnc 00:01:55.500 ==> default: -- Graphics Port: -1 00:01:55.500 ==> default: -- Graphics IP: 127.0.0.1 00:01:55.500 ==> default: -- Graphics Password: Not defined 00:01:55.500 ==> default: -- Video Type: cirrus 00:01:55.500 ==> default: -- Video VRAM: 9216 00:01:55.500 ==> default: -- Sound Type: 00:01:55.500 ==> default: -- Keymap: en-us 00:01:55.500 ==> default: -- TPM Path: 00:01:55.500 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:55.500 ==> default: -- Command line args: 00:01:55.500 ==> default: -> value=-device, 00:01:55.500 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:55.500 ==> default: -> value=-drive, 00:01:55.500 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:01:55.500 ==> default: -> value=-device, 00:01:55.500 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:55.500 ==> default: -> value=-device, 00:01:55.500 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:55.500 ==> default: -> value=-drive, 00:01:55.500 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:55.500 ==> default: -> value=-device, 00:01:55.500 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:55.500 ==> default: -> value=-drive, 00:01:55.500 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:55.500 ==> default: -> value=-device, 00:01:55.500 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:55.500 ==> default: -> value=-drive, 00:01:55.500 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:55.500 ==> default: -> value=-device, 00:01:55.500 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:55.758 ==> default: Creating shared folders metadata... 00:01:55.758 ==> default: Starting domain. 00:01:57.134 ==> default: Waiting for domain to get an IP address... 00:02:15.323 ==> default: Waiting for SSH to become available... 00:02:15.323 ==> default: Configuring and enabling network interfaces... 
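Editor's note: the -drive/-device pairs listed above attach the raw backing images as NVMe namespaces: controller nvme-0 carries one namespace, controller nvme-1 carries three (nsid 1..3). A hedged sketch of an equivalent direct QEMU invocation assembled from those arguments; the memory size and -nographic are placeholders, and the second controller simply repeats the drive/nvme-ns pattern for the multi0/multi1/multi2 images shown above:

  qemu-system-x86_64 -m 2048 -nographic \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
    # ex7-nvme-multi1.img and ex7-nvme-multi2.img attach the same way as nsid=2 and nsid=3 on bus=nvme-1.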
00:02:17.856 default: SSH address: 192.168.121.49:22 00:02:17.856 default: SSH username: vagrant 00:02:17.856 default: SSH auth method: private key 00:02:20.384 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:28.490 ==> default: Mounting SSHFS shared folder... 00:02:29.057 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:29.057 ==> default: Checking Mount.. 00:02:30.430 ==> default: Folder Successfully Mounted! 00:02:30.430 ==> default: Running provisioner: file... 00:02:30.996 default: ~/.gitconfig => .gitconfig 00:02:31.561 00:02:31.561 SUCCESS! 00:02:31.561 00:02:31.561 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:31.561 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:31.561 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:31.561 00:02:31.570 [Pipeline] } 00:02:31.584 [Pipeline] // stage 00:02:31.591 [Pipeline] dir 00:02:31.592 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:02:31.593 [Pipeline] { 00:02:31.604 [Pipeline] catchError 00:02:31.605 [Pipeline] { 00:02:31.615 [Pipeline] sh 00:02:31.887 + vagrant ssh-config --host vagrant 00:02:31.887 + sed -ne /^Host/,$p 00:02:31.887 + tee ssh_conf 00:02:36.072 Host vagrant 00:02:36.072 HostName 192.168.121.49 00:02:36.072 User vagrant 00:02:36.072 Port 22 00:02:36.072 UserKnownHostsFile /dev/null 00:02:36.072 StrictHostKeyChecking no 00:02:36.072 PasswordAuthentication no 00:02:36.072 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:36.072 IdentitiesOnly yes 00:02:36.072 LogLevel FATAL 00:02:36.072 ForwardAgent yes 00:02:36.072 ForwardX11 yes 00:02:36.072 00:02:36.086 [Pipeline] withEnv 00:02:36.088 [Pipeline] { 00:02:36.103 [Pipeline] sh 00:02:36.383 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:36.383 source /etc/os-release 00:02:36.383 [[ -e /image.version ]] && img=$(< /image.version) 00:02:36.383 # Minimal, systemd-like check. 00:02:36.383 if [[ -e /.dockerenv ]]; then 00:02:36.383 # Clear garbage from the node's name: 00:02:36.383 # agt-er_autotest_547-896 -> autotest_547-896 00:02:36.383 # $HOSTNAME is the actual container id 00:02:36.383 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:36.383 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:36.383 # We can assume this is a mount from a host where container is running, 00:02:36.383 # so fetch its hostname to easily identify the target swarm worker. 
00:02:36.383 container="$(< /etc/hostname) ($agent)" 00:02:36.383 else 00:02:36.383 # Fallback 00:02:36.383 container=$agent 00:02:36.383 fi 00:02:36.383 fi 00:02:36.383 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:36.383 00:02:36.652 [Pipeline] } 00:02:36.670 [Pipeline] // withEnv 00:02:36.679 [Pipeline] setCustomBuildProperty 00:02:36.693 [Pipeline] stage 00:02:36.696 [Pipeline] { (Tests) 00:02:36.713 [Pipeline] sh 00:02:36.991 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:37.264 [Pipeline] sh 00:02:37.542 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:37.813 [Pipeline] timeout 00:02:37.814 Timeout set to expire in 30 min 00:02:37.815 [Pipeline] { 00:02:37.831 [Pipeline] sh 00:02:38.108 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:38.673 HEAD is now at f7b31b2b9 log: declare g_deprecation_epoch static 00:02:38.684 [Pipeline] sh 00:02:38.958 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:39.230 [Pipeline] sh 00:02:39.511 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:39.783 [Pipeline] sh 00:02:40.059 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:40.317 ++ readlink -f spdk_repo 00:02:40.317 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:40.317 + [[ -n /home/vagrant/spdk_repo ]] 00:02:40.317 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:40.317 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:40.317 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:40.317 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:40.317 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:40.317 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:40.317 + cd /home/vagrant/spdk_repo 00:02:40.317 + source /etc/os-release 00:02:40.317 ++ NAME='Fedora Linux' 00:02:40.317 ++ VERSION='38 (Cloud Edition)' 00:02:40.317 ++ ID=fedora 00:02:40.317 ++ VERSION_ID=38 00:02:40.317 ++ VERSION_CODENAME= 00:02:40.317 ++ PLATFORM_ID=platform:f38 00:02:40.317 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:40.317 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:40.317 ++ LOGO=fedora-logo-icon 00:02:40.317 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:40.317 ++ HOME_URL=https://fedoraproject.org/ 00:02:40.317 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:40.317 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:40.317 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:40.317 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:40.317 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:40.317 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:40.317 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:40.317 ++ SUPPORT_END=2024-05-14 00:02:40.317 ++ VARIANT='Cloud Edition' 00:02:40.317 ++ VARIANT_ID=cloud 00:02:40.317 + uname -a 00:02:40.317 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:40.317 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:40.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:40.834 Hugepages 00:02:40.834 node hugesize free / total 00:02:40.834 node0 1048576kB 0 / 0 00:02:40.834 node0 2048kB 0 / 0 00:02:40.834 00:02:40.834 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:40.834 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:40.834 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:40.834 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:40.834 + rm -f /tmp/spdk-ld-path 00:02:40.834 + source autorun-spdk.conf 00:02:40.834 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.834 ++ SPDK_TEST_NVMF=1 00:02:40.834 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:40.834 ++ SPDK_TEST_URING=1 00:02:40.834 ++ SPDK_TEST_VFIOUSER=1 00:02:40.834 ++ SPDK_TEST_USDT=1 00:02:40.834 ++ SPDK_RUN_ASAN=1 00:02:40.834 ++ SPDK_RUN_UBSAN=1 00:02:40.834 ++ NET_TYPE=virt 00:02:40.834 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:40.834 ++ RUN_NIGHTLY=1 00:02:40.834 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:40.834 + [[ -n '' ]] 00:02:40.834 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:40.834 + for M in /var/spdk/build-*-manifest.txt 00:02:40.834 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:40.834 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.834 + for M in /var/spdk/build-*-manifest.txt 00:02:40.834 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:40.834 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:40.834 ++ uname 00:02:40.834 + [[ Linux == \L\i\n\u\x ]] 00:02:40.834 + sudo dmesg -T 00:02:40.834 + sudo dmesg --clear 00:02:40.834 + dmesg_pid=5167 00:02:40.834 + [[ Fedora Linux == FreeBSD ]] 00:02:40.834 + sudo dmesg -Tw 00:02:40.834 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.834 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:40.834 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:40.834 + [[ -x 
/usr/src/fio-static/fio ]] 00:02:40.834 + export FIO_BIN=/usr/src/fio-static/fio 00:02:40.834 + FIO_BIN=/usr/src/fio-static/fio 00:02:40.834 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:40.834 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:40.834 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:40.834 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:40.834 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:40.834 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:40.834 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:40.834 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:40.834 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:40.834 Test configuration: 00:02:40.834 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:40.834 SPDK_TEST_NVMF=1 00:02:40.834 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:40.834 SPDK_TEST_URING=1 00:02:40.834 SPDK_TEST_VFIOUSER=1 00:02:40.834 SPDK_TEST_USDT=1 00:02:40.834 SPDK_RUN_ASAN=1 00:02:40.834 SPDK_RUN_UBSAN=1 00:02:40.834 NET_TYPE=virt 00:02:40.834 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:41.093 RUN_NIGHTLY=1 18:09:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:41.093 18:09:52 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:41.093 18:09:52 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:41.093 18:09:52 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:41.093 18:09:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.093 18:09:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.093 18:09:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.093 18:09:52 -- paths/export.sh@5 -- $ export PATH 00:02:41.093 18:09:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.093 18:09:52 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:41.093 18:09:52 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:41.093 18:09:52 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721671792.XXXXXX 00:02:41.093 
18:09:52 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721671792.znew6H 00:02:41.093 18:09:52 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:41.093 18:09:52 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:41.093 18:09:52 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:41.093 18:09:52 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:41.093 18:09:52 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:41.093 18:09:52 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:41.093 18:09:52 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:41.093 18:09:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.093 18:09:52 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:41.093 18:09:52 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:41.093 18:09:52 -- pm/common@17 -- $ local monitor 00:02:41.093 18:09:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.094 18:09:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.094 18:09:52 -- pm/common@21 -- $ date +%s 00:02:41.094 18:09:52 -- pm/common@25 -- $ sleep 1 00:02:41.094 18:09:52 -- pm/common@21 -- $ date +%s 00:02:41.094 18:09:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721671792 00:02:41.094 18:09:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721671792 00:02:41.094 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721671792_collect-vmstat.pm.log 00:02:41.094 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721671792_collect-cpu-load.pm.log 00:02:42.028 18:09:53 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:42.028 18:09:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:42.028 18:09:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:42.028 18:09:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:42.028 18:09:53 -- spdk/autobuild.sh@16 -- $ date -u 00:02:42.028 Mon Jul 22 06:09:53 PM UTC 2024 00:02:42.028 18:09:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:42.028 v24.09-pre-297-gf7b31b2b9 00:02:42.028 18:09:53 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:42.028 18:09:53 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:42.028 18:09:53 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:42.028 18:09:53 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:42.028 18:09:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.028 ************************************ 00:02:42.028 START TEST asan 00:02:42.028 ************************************ 00:02:42.028 using asan 00:02:42.028 18:09:53 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:02:42.028 00:02:42.028 real 
0m0.000s 00:02:42.028 user 0m0.000s 00:02:42.028 sys 0m0.000s 00:02:42.028 18:09:53 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:42.028 18:09:53 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:42.028 ************************************ 00:02:42.028 END TEST asan 00:02:42.028 ************************************ 00:02:42.028 18:09:54 -- common/autotest_common.sh@1142 -- $ return 0 00:02:42.028 18:09:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:42.028 18:09:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:42.028 18:09:54 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:42.028 18:09:54 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:42.028 18:09:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.028 ************************************ 00:02:42.028 START TEST ubsan 00:02:42.028 ************************************ 00:02:42.028 using ubsan 00:02:42.028 18:09:54 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:42.028 00:02:42.028 real 0m0.000s 00:02:42.028 user 0m0.000s 00:02:42.028 sys 0m0.000s 00:02:42.028 18:09:54 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:42.028 ************************************ 00:02:42.028 END TEST ubsan 00:02:42.028 18:09:54 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:42.028 ************************************ 00:02:42.286 18:09:54 -- common/autotest_common.sh@1142 -- $ return 0 00:02:42.286 18:09:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:42.286 18:09:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:42.286 18:09:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:42.286 18:09:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:42.286 18:09:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:42.286 18:09:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:42.286 18:09:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:42.286 18:09:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:42.286 18:09:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:02:42.286 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:42.286 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:42.884 Using 'verbs' RDMA provider 00:02:56.024 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:10.932 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:10.933 Creating mk/config.mk...done. 00:03:10.933 Creating mk/cc.flags.mk...done. 00:03:10.933 Type 'make' to build. 00:03:10.933 18:10:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:10.933 18:10:20 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:10.933 18:10:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:10.933 18:10:20 -- common/autotest_common.sh@10 -- $ set +x 00:03:10.933 ************************************ 00:03:10.933 START TEST make 00:03:10.933 ************************************ 00:03:10.933 18:10:20 make -- common/autotest_common.sh@1123 -- $ make -j10 00:03:10.933 make[1]: Nothing to be done for 'all'. 
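Editor's note: the libvfio-user build that follows is a plain Meson debug build into spdk/build/libvfio-user/build-debug, staged with DESTDIR as the log records. A hedged reconstruction of the equivalent manual steps, using only the options visible in the output below (buildtype=debug, default_library=shared, libdir=/usr/local/lib); the meson setup line is an assumption inferred from those printed options, not a command shown in this log:

  # Configure, build and stage libvfio-user as recorded below (setup line reconstructed).
  meson setup /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug \
        /home/vagrant/spdk_repo/spdk/libvfio-user \
        -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
  ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
  DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user \
        meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug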
00:03:10.933 The Meson build system 00:03:10.933 Version: 1.3.1 00:03:10.933 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:03:10.933 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:10.933 Build type: native build 00:03:10.933 Project name: libvfio-user 00:03:10.933 Project version: 0.0.1 00:03:10.933 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:10.933 C linker for the host machine: cc ld.bfd 2.39-16 00:03:10.933 Host machine cpu family: x86_64 00:03:10.933 Host machine cpu: x86_64 00:03:10.933 Run-time dependency threads found: YES 00:03:10.933 Library dl found: YES 00:03:10.933 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:10.933 Run-time dependency json-c found: YES 0.17 00:03:10.933 Run-time dependency cmocka found: YES 1.1.7 00:03:10.933 Program pytest-3 found: NO 00:03:10.933 Program flake8 found: NO 00:03:10.933 Program misspell-fixer found: NO 00:03:10.933 Program restructuredtext-lint found: NO 00:03:10.933 Program valgrind found: YES (/usr/bin/valgrind) 00:03:10.933 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:10.933 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:10.933 Compiler for C supports arguments -Wwrite-strings: YES 00:03:10.933 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:10.933 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:03:10.933 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:03:10.933 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:10.933 Build targets in project: 8 00:03:10.933 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:10.933 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:10.933 00:03:10.933 libvfio-user 0.0.1 00:03:10.933 00:03:10.933 User defined options 00:03:10.933 buildtype : debug 00:03:10.933 default_library: shared 00:03:10.933 libdir : /usr/local/lib 00:03:10.933 00:03:10.933 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:10.933 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:10.933 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:10.933 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:10.933 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:10.933 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:11.191 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:11.191 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:11.191 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:11.191 [8/37] Compiling C object samples/null.p/null.c.o 00:03:11.191 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:11.191 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:11.191 [11/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:11.191 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:11.191 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:11.191 [14/37] Compiling C object samples/server.p/server.c.o 00:03:11.191 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:11.191 [16/37] Compiling C object samples/client.p/client.c.o 00:03:11.191 [17/37] Linking target samples/client 00:03:11.191 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:11.456 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:11.456 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:11.456 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:11.456 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:11.456 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:11.456 [24/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:11.456 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:11.456 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:11.456 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:03:11.456 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:11.456 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:11.456 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:11.456 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:11.456 [32/37] Linking target test/unit_tests 00:03:11.456 [33/37] Linking target samples/null 00:03:11.456 [34/37] Linking target samples/gpio-pci-idio-16 00:03:11.717 [35/37] Linking target samples/server 00:03:11.717 [36/37] Linking target samples/lspci 00:03:11.717 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:11.717 INFO: autodetecting backend as ninja 00:03:11.717 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:11.717 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:11.975 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:11.975 ninja: no work to do. 00:03:21.948 The Meson build system 00:03:21.948 Version: 1.3.1 00:03:21.948 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:21.948 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:21.948 Build type: native build 00:03:21.948 Program cat found: YES (/usr/bin/cat) 00:03:21.948 Project name: DPDK 00:03:21.948 Project version: 24.03.0 00:03:21.948 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:21.948 C linker for the host machine: cc ld.bfd 2.39-16 00:03:21.948 Host machine cpu family: x86_64 00:03:21.948 Host machine cpu: x86_64 00:03:21.948 Message: ## Building in Developer Mode ## 00:03:21.948 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:21.948 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:21.948 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:21.948 Program python3 found: YES (/usr/bin/python3) 00:03:21.948 Program cat found: YES (/usr/bin/cat) 00:03:21.948 Compiler for C supports arguments -march=native: YES 00:03:21.948 Checking for size of "void *" : 8 00:03:21.948 Checking for size of "void *" : 8 (cached) 00:03:21.948 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:21.948 Library m found: YES 00:03:21.948 Library numa found: YES 00:03:21.948 Has header "numaif.h" : YES 00:03:21.948 Library fdt found: NO 00:03:21.948 Library execinfo found: NO 00:03:21.948 Has header "execinfo.h" : YES 00:03:21.948 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:21.948 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:21.948 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:21.948 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:21.948 Run-time dependency openssl found: YES 3.0.9 00:03:21.948 Run-time dependency libpcap found: YES 1.10.4 00:03:21.948 Has header "pcap.h" with dependency libpcap: YES 00:03:21.948 Compiler for C supports arguments -Wcast-qual: YES 00:03:21.948 Compiler for C supports arguments -Wdeprecated: YES 00:03:21.948 Compiler for C supports arguments -Wformat: YES 00:03:21.948 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:21.948 Compiler for C supports arguments -Wformat-security: NO 00:03:21.948 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:21.948 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:21.948 Compiler for C supports arguments -Wnested-externs: YES 00:03:21.948 Compiler for C supports arguments -Wold-style-definition: YES 00:03:21.948 Compiler for C supports arguments -Wpointer-arith: YES 00:03:21.948 Compiler for C supports arguments -Wsign-compare: YES 00:03:21.948 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:21.948 Compiler for C supports arguments -Wundef: YES 00:03:21.948 Compiler for C supports arguments -Wwrite-strings: YES 00:03:21.948 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:21.948 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:21.948 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:21.948 Compiler for C supports arguments -Wno-zero-length-bounds: 
YES 00:03:21.948 Program objdump found: YES (/usr/bin/objdump) 00:03:21.948 Compiler for C supports arguments -mavx512f: YES 00:03:21.948 Checking if "AVX512 checking" compiles: YES 00:03:21.948 Fetching value of define "__SSE4_2__" : 1 00:03:21.948 Fetching value of define "__AES__" : 1 00:03:21.948 Fetching value of define "__AVX__" : 1 00:03:21.948 Fetching value of define "__AVX2__" : 1 00:03:21.948 Fetching value of define "__AVX512BW__" : (undefined) 00:03:21.948 Fetching value of define "__AVX512CD__" : (undefined) 00:03:21.948 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:21.948 Fetching value of define "__AVX512F__" : (undefined) 00:03:21.948 Fetching value of define "__AVX512VL__" : (undefined) 00:03:21.948 Fetching value of define "__PCLMUL__" : 1 00:03:21.948 Fetching value of define "__RDRND__" : 1 00:03:21.948 Fetching value of define "__RDSEED__" : 1 00:03:21.948 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:21.948 Fetching value of define "__znver1__" : (undefined) 00:03:21.948 Fetching value of define "__znver2__" : (undefined) 00:03:21.948 Fetching value of define "__znver3__" : (undefined) 00:03:21.948 Fetching value of define "__znver4__" : (undefined) 00:03:21.948 Library asan found: YES 00:03:21.948 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:21.948 Message: lib/log: Defining dependency "log" 00:03:21.948 Message: lib/kvargs: Defining dependency "kvargs" 00:03:21.948 Message: lib/telemetry: Defining dependency "telemetry" 00:03:21.948 Library rt found: YES 00:03:21.948 Checking for function "getentropy" : NO 00:03:21.948 Message: lib/eal: Defining dependency "eal" 00:03:21.948 Message: lib/ring: Defining dependency "ring" 00:03:21.948 Message: lib/rcu: Defining dependency "rcu" 00:03:21.948 Message: lib/mempool: Defining dependency "mempool" 00:03:21.948 Message: lib/mbuf: Defining dependency "mbuf" 00:03:21.948 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:21.948 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:21.948 Compiler for C supports arguments -mpclmul: YES 00:03:21.948 Compiler for C supports arguments -maes: YES 00:03:21.948 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:21.948 Compiler for C supports arguments -mavx512bw: YES 00:03:21.948 Compiler for C supports arguments -mavx512dq: YES 00:03:21.948 Compiler for C supports arguments -mavx512vl: YES 00:03:21.948 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:21.948 Compiler for C supports arguments -mavx2: YES 00:03:21.948 Compiler for C supports arguments -mavx: YES 00:03:21.948 Message: lib/net: Defining dependency "net" 00:03:21.948 Message: lib/meter: Defining dependency "meter" 00:03:21.948 Message: lib/ethdev: Defining dependency "ethdev" 00:03:21.948 Message: lib/pci: Defining dependency "pci" 00:03:21.948 Message: lib/cmdline: Defining dependency "cmdline" 00:03:21.948 Message: lib/hash: Defining dependency "hash" 00:03:21.948 Message: lib/timer: Defining dependency "timer" 00:03:21.948 Message: lib/compressdev: Defining dependency "compressdev" 00:03:21.948 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:21.948 Message: lib/dmadev: Defining dependency "dmadev" 00:03:21.948 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:21.948 Message: lib/power: Defining dependency "power" 00:03:21.948 Message: lib/reorder: Defining dependency "reorder" 00:03:21.948 Message: lib/security: Defining dependency "security" 00:03:21.948 Has header "linux/userfaultfd.h" : YES 
00:03:21.948 Has header "linux/vduse.h" : YES 00:03:21.948 Message: lib/vhost: Defining dependency "vhost" 00:03:21.948 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:21.948 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:21.948 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:21.948 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:21.948 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:21.948 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:21.948 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:21.948 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:21.948 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:21.948 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:21.948 Program doxygen found: YES (/usr/bin/doxygen) 00:03:21.949 Configuring doxy-api-html.conf using configuration 00:03:21.949 Configuring doxy-api-man.conf using configuration 00:03:21.949 Program mandb found: YES (/usr/bin/mandb) 00:03:21.949 Program sphinx-build found: NO 00:03:21.949 Configuring rte_build_config.h using configuration 00:03:21.949 Message: 00:03:21.949 ================= 00:03:21.949 Applications Enabled 00:03:21.949 ================= 00:03:21.949 00:03:21.949 apps: 00:03:21.949 00:03:21.949 00:03:21.949 Message: 00:03:21.949 ================= 00:03:21.949 Libraries Enabled 00:03:21.949 ================= 00:03:21.949 00:03:21.949 libs: 00:03:21.949 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:21.949 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:21.949 cryptodev, dmadev, power, reorder, security, vhost, 00:03:21.949 00:03:21.949 Message: 00:03:21.949 =============== 00:03:21.949 Drivers Enabled 00:03:21.949 =============== 00:03:21.949 00:03:21.949 common: 00:03:21.949 00:03:21.949 bus: 00:03:21.949 pci, vdev, 00:03:21.949 mempool: 00:03:21.949 ring, 00:03:21.949 dma: 00:03:21.949 00:03:21.949 net: 00:03:21.949 00:03:21.949 crypto: 00:03:21.949 00:03:21.949 compress: 00:03:21.949 00:03:21.949 vdpa: 00:03:21.949 00:03:21.949 00:03:21.949 Message: 00:03:21.949 ================= 00:03:21.949 Content Skipped 00:03:21.949 ================= 00:03:21.949 00:03:21.949 apps: 00:03:21.949 dumpcap: explicitly disabled via build config 00:03:21.949 graph: explicitly disabled via build config 00:03:21.949 pdump: explicitly disabled via build config 00:03:21.949 proc-info: explicitly disabled via build config 00:03:21.949 test-acl: explicitly disabled via build config 00:03:21.949 test-bbdev: explicitly disabled via build config 00:03:21.949 test-cmdline: explicitly disabled via build config 00:03:21.949 test-compress-perf: explicitly disabled via build config 00:03:21.949 test-crypto-perf: explicitly disabled via build config 00:03:21.949 test-dma-perf: explicitly disabled via build config 00:03:21.949 test-eventdev: explicitly disabled via build config 00:03:21.949 test-fib: explicitly disabled via build config 00:03:21.949 test-flow-perf: explicitly disabled via build config 00:03:21.949 test-gpudev: explicitly disabled via build config 00:03:21.949 test-mldev: explicitly disabled via build config 00:03:21.949 test-pipeline: explicitly disabled via build config 00:03:21.949 test-pmd: explicitly disabled via build config 00:03:21.949 test-regex: explicitly disabled via build config 00:03:21.949 test-sad: explicitly disabled via build 
config 00:03:21.949 test-security-perf: explicitly disabled via build config 00:03:21.949 00:03:21.949 libs: 00:03:21.949 argparse: explicitly disabled via build config 00:03:21.949 metrics: explicitly disabled via build config 00:03:21.949 acl: explicitly disabled via build config 00:03:21.949 bbdev: explicitly disabled via build config 00:03:21.949 bitratestats: explicitly disabled via build config 00:03:21.949 bpf: explicitly disabled via build config 00:03:21.949 cfgfile: explicitly disabled via build config 00:03:21.949 distributor: explicitly disabled via build config 00:03:21.949 efd: explicitly disabled via build config 00:03:21.949 eventdev: explicitly disabled via build config 00:03:21.949 dispatcher: explicitly disabled via build config 00:03:21.949 gpudev: explicitly disabled via build config 00:03:21.949 gro: explicitly disabled via build config 00:03:21.949 gso: explicitly disabled via build config 00:03:21.949 ip_frag: explicitly disabled via build config 00:03:21.949 jobstats: explicitly disabled via build config 00:03:21.949 latencystats: explicitly disabled via build config 00:03:21.949 lpm: explicitly disabled via build config 00:03:21.949 member: explicitly disabled via build config 00:03:21.949 pcapng: explicitly disabled via build config 00:03:21.949 rawdev: explicitly disabled via build config 00:03:21.949 regexdev: explicitly disabled via build config 00:03:21.949 mldev: explicitly disabled via build config 00:03:21.949 rib: explicitly disabled via build config 00:03:21.949 sched: explicitly disabled via build config 00:03:21.949 stack: explicitly disabled via build config 00:03:21.949 ipsec: explicitly disabled via build config 00:03:21.949 pdcp: explicitly disabled via build config 00:03:21.949 fib: explicitly disabled via build config 00:03:21.949 port: explicitly disabled via build config 00:03:21.949 pdump: explicitly disabled via build config 00:03:21.949 table: explicitly disabled via build config 00:03:21.949 pipeline: explicitly disabled via build config 00:03:21.949 graph: explicitly disabled via build config 00:03:21.949 node: explicitly disabled via build config 00:03:21.949 00:03:21.949 drivers: 00:03:21.949 common/cpt: not in enabled drivers build config 00:03:21.949 common/dpaax: not in enabled drivers build config 00:03:21.949 common/iavf: not in enabled drivers build config 00:03:21.949 common/idpf: not in enabled drivers build config 00:03:21.949 common/ionic: not in enabled drivers build config 00:03:21.949 common/mvep: not in enabled drivers build config 00:03:21.949 common/octeontx: not in enabled drivers build config 00:03:21.949 bus/auxiliary: not in enabled drivers build config 00:03:21.949 bus/cdx: not in enabled drivers build config 00:03:21.949 bus/dpaa: not in enabled drivers build config 00:03:21.949 bus/fslmc: not in enabled drivers build config 00:03:21.949 bus/ifpga: not in enabled drivers build config 00:03:21.949 bus/platform: not in enabled drivers build config 00:03:21.949 bus/uacce: not in enabled drivers build config 00:03:21.949 bus/vmbus: not in enabled drivers build config 00:03:21.949 common/cnxk: not in enabled drivers build config 00:03:21.949 common/mlx5: not in enabled drivers build config 00:03:21.949 common/nfp: not in enabled drivers build config 00:03:21.949 common/nitrox: not in enabled drivers build config 00:03:21.949 common/qat: not in enabled drivers build config 00:03:21.949 common/sfc_efx: not in enabled drivers build config 00:03:21.949 mempool/bucket: not in enabled drivers build config 00:03:21.949 
mempool/cnxk: not in enabled drivers build config 00:03:21.949 mempool/dpaa: not in enabled drivers build config 00:03:21.949 mempool/dpaa2: not in enabled drivers build config 00:03:21.949 mempool/octeontx: not in enabled drivers build config 00:03:21.949 mempool/stack: not in enabled drivers build config 00:03:21.949 dma/cnxk: not in enabled drivers build config 00:03:21.949 dma/dpaa: not in enabled drivers build config 00:03:21.949 dma/dpaa2: not in enabled drivers build config 00:03:21.949 dma/hisilicon: not in enabled drivers build config 00:03:21.949 dma/idxd: not in enabled drivers build config 00:03:21.949 dma/ioat: not in enabled drivers build config 00:03:21.949 dma/skeleton: not in enabled drivers build config 00:03:21.949 net/af_packet: not in enabled drivers build config 00:03:21.949 net/af_xdp: not in enabled drivers build config 00:03:21.949 net/ark: not in enabled drivers build config 00:03:21.949 net/atlantic: not in enabled drivers build config 00:03:21.949 net/avp: not in enabled drivers build config 00:03:21.949 net/axgbe: not in enabled drivers build config 00:03:21.949 net/bnx2x: not in enabled drivers build config 00:03:21.949 net/bnxt: not in enabled drivers build config 00:03:21.949 net/bonding: not in enabled drivers build config 00:03:21.949 net/cnxk: not in enabled drivers build config 00:03:21.949 net/cpfl: not in enabled drivers build config 00:03:21.949 net/cxgbe: not in enabled drivers build config 00:03:21.949 net/dpaa: not in enabled drivers build config 00:03:21.949 net/dpaa2: not in enabled drivers build config 00:03:21.949 net/e1000: not in enabled drivers build config 00:03:21.949 net/ena: not in enabled drivers build config 00:03:21.949 net/enetc: not in enabled drivers build config 00:03:21.949 net/enetfec: not in enabled drivers build config 00:03:21.949 net/enic: not in enabled drivers build config 00:03:21.949 net/failsafe: not in enabled drivers build config 00:03:21.949 net/fm10k: not in enabled drivers build config 00:03:21.949 net/gve: not in enabled drivers build config 00:03:21.949 net/hinic: not in enabled drivers build config 00:03:21.949 net/hns3: not in enabled drivers build config 00:03:21.949 net/i40e: not in enabled drivers build config 00:03:21.949 net/iavf: not in enabled drivers build config 00:03:21.949 net/ice: not in enabled drivers build config 00:03:21.949 net/idpf: not in enabled drivers build config 00:03:21.949 net/igc: not in enabled drivers build config 00:03:21.949 net/ionic: not in enabled drivers build config 00:03:21.949 net/ipn3ke: not in enabled drivers build config 00:03:21.949 net/ixgbe: not in enabled drivers build config 00:03:21.949 net/mana: not in enabled drivers build config 00:03:21.949 net/memif: not in enabled drivers build config 00:03:21.949 net/mlx4: not in enabled drivers build config 00:03:21.949 net/mlx5: not in enabled drivers build config 00:03:21.949 net/mvneta: not in enabled drivers build config 00:03:21.949 net/mvpp2: not in enabled drivers build config 00:03:21.949 net/netvsc: not in enabled drivers build config 00:03:21.949 net/nfb: not in enabled drivers build config 00:03:21.949 net/nfp: not in enabled drivers build config 00:03:21.949 net/ngbe: not in enabled drivers build config 00:03:21.949 net/null: not in enabled drivers build config 00:03:21.949 net/octeontx: not in enabled drivers build config 00:03:21.949 net/octeon_ep: not in enabled drivers build config 00:03:21.949 net/pcap: not in enabled drivers build config 00:03:21.949 net/pfe: not in enabled drivers build config 
00:03:21.949 net/qede: not in enabled drivers build config 00:03:21.949 net/ring: not in enabled drivers build config 00:03:21.950 net/sfc: not in enabled drivers build config 00:03:21.950 net/softnic: not in enabled drivers build config 00:03:21.950 net/tap: not in enabled drivers build config 00:03:21.950 net/thunderx: not in enabled drivers build config 00:03:21.950 net/txgbe: not in enabled drivers build config 00:03:21.950 net/vdev_netvsc: not in enabled drivers build config 00:03:21.950 net/vhost: not in enabled drivers build config 00:03:21.950 net/virtio: not in enabled drivers build config 00:03:21.950 net/vmxnet3: not in enabled drivers build config 00:03:21.950 raw/*: missing internal dependency, "rawdev" 00:03:21.950 crypto/armv8: not in enabled drivers build config 00:03:21.950 crypto/bcmfs: not in enabled drivers build config 00:03:21.950 crypto/caam_jr: not in enabled drivers build config 00:03:21.950 crypto/ccp: not in enabled drivers build config 00:03:21.950 crypto/cnxk: not in enabled drivers build config 00:03:21.950 crypto/dpaa_sec: not in enabled drivers build config 00:03:21.950 crypto/dpaa2_sec: not in enabled drivers build config 00:03:21.950 crypto/ipsec_mb: not in enabled drivers build config 00:03:21.950 crypto/mlx5: not in enabled drivers build config 00:03:21.950 crypto/mvsam: not in enabled drivers build config 00:03:21.950 crypto/nitrox: not in enabled drivers build config 00:03:21.950 crypto/null: not in enabled drivers build config 00:03:21.950 crypto/octeontx: not in enabled drivers build config 00:03:21.950 crypto/openssl: not in enabled drivers build config 00:03:21.950 crypto/scheduler: not in enabled drivers build config 00:03:21.950 crypto/uadk: not in enabled drivers build config 00:03:21.950 crypto/virtio: not in enabled drivers build config 00:03:21.950 compress/isal: not in enabled drivers build config 00:03:21.950 compress/mlx5: not in enabled drivers build config 00:03:21.950 compress/nitrox: not in enabled drivers build config 00:03:21.950 compress/octeontx: not in enabled drivers build config 00:03:21.950 compress/zlib: not in enabled drivers build config 00:03:21.950 regex/*: missing internal dependency, "regexdev" 00:03:21.950 ml/*: missing internal dependency, "mldev" 00:03:21.950 vdpa/ifc: not in enabled drivers build config 00:03:21.950 vdpa/mlx5: not in enabled drivers build config 00:03:21.950 vdpa/nfp: not in enabled drivers build config 00:03:21.950 vdpa/sfc: not in enabled drivers build config 00:03:21.950 event/*: missing internal dependency, "eventdev" 00:03:21.950 baseband/*: missing internal dependency, "bbdev" 00:03:21.950 gpu/*: missing internal dependency, "gpudev" 00:03:21.950 00:03:21.950 00:03:21.950 Build targets in project: 85 00:03:21.950 00:03:21.950 DPDK 24.03.0 00:03:21.950 00:03:21.950 User defined options 00:03:21.950 buildtype : debug 00:03:21.950 default_library : shared 00:03:21.950 libdir : lib 00:03:21.950 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:21.950 b_sanitize : address 00:03:21.950 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:21.950 c_link_args : 00:03:21.950 cpu_instruction_set: native 00:03:21.950 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:21.950 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:21.950 enable_docs : false 00:03:21.950 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:21.950 enable_kmods : false 00:03:21.950 max_lcores : 128 00:03:21.950 tests : false 00:03:21.950 00:03:21.950 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:22.516 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:22.516 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:22.516 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:22.774 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:22.774 [4/268] Linking static target lib/librte_kvargs.a 00:03:22.774 [5/268] Linking static target lib/librte_log.a 00:03:22.774 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:23.032 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.032 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:23.291 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:23.291 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:23.291 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:23.548 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:23.548 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:23.548 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:23.548 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.549 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:23.806 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:23.806 [18/268] Linking static target lib/librte_telemetry.a 00:03:23.806 [19/268] Linking target lib/librte_log.so.24.1 00:03:23.806 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:24.071 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:24.071 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:24.071 [23/268] Linking target lib/librte_kvargs.so.24.1 00:03:24.071 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:24.071 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:24.071 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:24.329 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:24.587 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:24.587 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:24.587 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:24.587 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:24.587 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.587 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 
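Note on the "User defined options" summary printed just above: it records how the bundled DPDK 24.03 was configured for this run. As a hedged sketch only (the real invocation is generated by SPDK's build scripts for the bundled DPDK and does not appear verbatim in this excerpt; the disable_apps/disable_libs values are abbreviated here to their first few entries, the full lists are in the summary above), the same configuration could be expressed as:

  # Illustrative meson setup call assumed from the option summary above -- not the literal command used.
  meson setup build-tmp \
    --buildtype=debug --default-library=shared --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps='dumpcap,graph,pdump,...' \
    -Ddisable_libs='acl,argparse,bbdev,...' \
    -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
    -Denable_docs=false -Denable_kmods=false \
    -Dmax_lcores=128 -Dtests=false
  ninja -C build-tmp

Each "explicitly disabled via build config" entry earlier in the summary follows directly from the disable_apps/disable_libs lists, and the drivers reported as "not in enabled drivers build config" are everything outside the four entries passed to enable_drivers.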
00:03:24.587 [34/268] Linking target lib/librte_telemetry.so.24.1 00:03:24.866 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:24.866 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:25.124 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:25.124 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:25.124 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:25.124 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:25.124 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:25.382 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:25.382 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:25.382 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:25.382 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:25.382 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:25.639 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:25.897 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:25.897 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:25.897 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:25.897 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:26.154 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:26.412 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:26.412 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:26.412 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:26.412 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:26.412 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:26.412 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:26.669 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:26.669 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:26.669 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:26.927 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:27.184 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:27.184 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:27.184 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:27.184 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:27.442 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:27.442 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:27.442 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:27.700 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:27.700 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:27.700 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:27.958 [73/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:27.958 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:27.958 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:27.958 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:27.958 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:28.216 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:28.216 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:28.216 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:28.216 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:28.473 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:28.473 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:28.473 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:28.473 [85/268] Linking static target lib/librte_eal.a 00:03:28.730 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:28.988 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:28.988 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:28.988 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:28.988 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:29.245 [91/268] Linking static target lib/librte_ring.a 00:03:29.245 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:29.245 [93/268] Linking static target lib/librte_mempool.a 00:03:29.245 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:29.245 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:29.503 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:29.503 [97/268] Linking static target lib/librte_rcu.a 00:03:29.503 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:29.503 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:29.503 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.760 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:29.760 [102/268] Linking static target lib/librte_mbuf.a 00:03:29.760 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:30.017 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:30.017 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.017 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:30.017 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:30.274 [108/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:30.274 [109/268] Linking static target lib/librte_net.a 00:03:30.274 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:30.274 [111/268] Linking static target lib/librte_meter.a 00:03:30.274 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.840 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:30.840 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:30.840 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 
00:03:30.840 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.840 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.840 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.840 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:31.407 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:31.407 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:31.665 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:31.665 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:31.922 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:31.922 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:31.922 [126/268] Linking static target lib/librte_pci.a 00:03:32.188 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:32.188 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:32.188 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:32.188 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:32.445 [131/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.445 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:32.445 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:32.445 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:32.445 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:32.445 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:32.445 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:32.704 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:32.704 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:32.704 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:32.704 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:32.704 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:32.704 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:32.704 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:32.962 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:33.219 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:33.219 [147/268] Linking static target lib/librte_cmdline.a 00:03:33.219 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:33.477 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:33.734 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:33.734 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:33.734 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:33.734 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:33.734 [154/268] Linking static target lib/librte_timer.a 00:03:33.990 [155/268] Compiling 
C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:34.247 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:34.247 [157/268] Linking static target lib/librte_ethdev.a 00:03:34.247 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:34.247 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:34.247 [160/268] Linking static target lib/librte_compressdev.a 00:03:34.247 [161/268] Linking static target lib/librte_hash.a 00:03:34.503 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:34.503 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.503 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:34.762 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:34.762 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:34.762 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.762 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:35.023 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:35.023 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:35.023 [171/268] Linking static target lib/librte_dmadev.a 00:03:35.023 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.281 [173/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.281 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:35.281 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:35.281 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:35.539 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:35.796 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:35.796 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:35.796 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:35.796 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.796 [182/268] Linking static target lib/librte_cryptodev.a 00:03:35.796 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:35.796 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:36.056 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:36.056 [186/268] Linking static target lib/librte_power.a 00:03:36.313 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:36.313 [188/268] Linking static target lib/librte_reorder.a 00:03:36.572 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:36.572 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:36.572 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:36.572 [192/268] Linking static target lib/librte_security.a 00:03:36.572 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.572 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:36.829 [195/268] Generating lib/power.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:36.829 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:37.087 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.087 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:37.361 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:37.638 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:37.638 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:37.638 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.638 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:37.638 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:37.638 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:37.896 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:38.155 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:38.155 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:38.155 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:38.155 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:38.155 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:38.413 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:38.413 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:38.413 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:38.413 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:38.413 [216/268] Linking static target drivers/librte_bus_pci.a 00:03:38.413 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:38.413 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:38.413 [219/268] Linking static target drivers/librte_bus_vdev.a 00:03:38.413 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:38.413 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:38.676 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.676 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:38.676 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:38.676 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:38.676 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:38.935 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.193 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.452 [229/268] Linking target lib/librte_eal.so.24.1 00:03:39.452 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:39.710 [231/268] Linking target lib/librte_meter.so.24.1 00:03:39.710 [232/268] Linking target lib/librte_dmadev.so.24.1 00:03:39.710 [233/268] Linking 
target lib/librte_pci.so.24.1 00:03:39.710 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:39.710 [235/268] Linking target lib/librte_timer.so.24.1 00:03:39.710 [236/268] Linking target lib/librte_ring.so.24.1 00:03:39.710 [237/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:39.710 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:39.710 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:39.710 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:39.710 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:39.710 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:39.710 [243/268] Linking target lib/librte_rcu.so.24.1 00:03:39.710 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:39.968 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:39.968 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:39.968 [247/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:39.968 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:39.968 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:40.225 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:40.225 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:40.225 [252/268] Linking target lib/librte_net.so.24.1 00:03:40.225 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:03:40.225 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:40.482 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:40.482 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:40.482 [257/268] Linking target lib/librte_security.so.24.1 00:03:40.482 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:40.482 [259/268] Linking target lib/librte_hash.so.24.1 00:03:40.741 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:41.305 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.305 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:41.562 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:41.562 [264/268] Linking target lib/librte_power.so.24.1 00:03:44.091 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:44.091 [266/268] Linking static target lib/librte_vhost.a 00:03:45.989 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.989 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:45.989 INFO: autodetecting backend as ninja 00:03:45.989 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:46.922 CC lib/log/log.o 00:03:46.922 CC lib/log/log_flags.o 00:03:46.922 CC lib/log/log_deprecated.o 00:03:46.922 CC lib/ut_mock/mock.o 00:03:46.922 CC lib/ut/ut.o 00:03:47.180 LIB libspdk_log.a 00:03:47.180 LIB libspdk_ut_mock.a 00:03:47.180 LIB libspdk_ut.a 00:03:47.180 SO libspdk_log.so.7.0 00:03:47.180 SO libspdk_ut_mock.so.6.0 00:03:47.180 SO libspdk_ut.so.2.0 00:03:47.180 SYMLINK libspdk_log.so 00:03:47.180 SYMLINK libspdk_ut_mock.so 00:03:47.180 SYMLINK libspdk_ut.so 
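Note on the LIB / SO / SYMLINK messages appearing just above (and throughout the rest of the SPDK build): for each component they report the static archive, the versioned shared object, and the unversioned development symlink. A rough, assumed illustration of what one such triplet amounts to for the log component whose objects were compiled above; the actual rules live in SPDK's make infrastructure and are not shown in this log:

  # Assumed illustration only -- object names are taken from the CC lines above;
  # the soname choice and linker flags are guesses, not SPDK's real make rule.
  ar rcs libspdk_log.a log.o log_flags.o log_deprecated.o            # LIB libspdk_log.a
  cc -shared -fPIC -o libspdk_log.so.7.0 log.o log_flags.o log_deprecated.o \
     -Wl,-soname,libspdk_log.so.7.0                                  # SO libspdk_log.so.7.0
  ln -sf libspdk_log.so.7.0 libspdk_log.so                           # SYMLINK libspdk_log.so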
00:03:47.439 CC lib/ioat/ioat.o 00:03:47.439 CC lib/dma/dma.o 00:03:47.439 CXX lib/trace_parser/trace.o 00:03:47.439 CC lib/util/base64.o 00:03:47.439 CC lib/util/bit_array.o 00:03:47.439 CC lib/util/cpuset.o 00:03:47.439 CC lib/util/crc16.o 00:03:47.439 CC lib/util/crc32.o 00:03:47.439 CC lib/util/crc32c.o 00:03:47.697 CC lib/vfio_user/host/vfio_user_pci.o 00:03:47.697 CC lib/vfio_user/host/vfio_user.o 00:03:47.697 CC lib/util/crc32_ieee.o 00:03:47.697 LIB libspdk_dma.a 00:03:47.697 SO libspdk_dma.so.4.0 00:03:47.697 CC lib/util/crc64.o 00:03:47.697 SYMLINK libspdk_dma.so 00:03:47.697 CC lib/util/dif.o 00:03:47.697 CC lib/util/fd.o 00:03:47.697 CC lib/util/fd_group.o 00:03:47.955 CC lib/util/file.o 00:03:47.955 CC lib/util/hexlify.o 00:03:47.955 CC lib/util/iov.o 00:03:47.955 LIB libspdk_ioat.a 00:03:47.955 CC lib/util/math.o 00:03:47.955 SO libspdk_ioat.so.7.0 00:03:47.955 CC lib/util/net.o 00:03:47.955 SYMLINK libspdk_ioat.so 00:03:47.955 CC lib/util/pipe.o 00:03:47.955 CC lib/util/strerror_tls.o 00:03:47.955 CC lib/util/string.o 00:03:48.213 CC lib/util/uuid.o 00:03:48.213 CC lib/util/xor.o 00:03:48.213 LIB libspdk_vfio_user.a 00:03:48.213 CC lib/util/zipf.o 00:03:48.213 SO libspdk_vfio_user.so.5.0 00:03:48.213 SYMLINK libspdk_vfio_user.so 00:03:48.778 LIB libspdk_util.a 00:03:48.778 SO libspdk_util.so.10.0 00:03:48.778 LIB libspdk_trace_parser.a 00:03:48.778 SO libspdk_trace_parser.so.5.0 00:03:49.036 SYMLINK libspdk_trace_parser.so 00:03:49.036 SYMLINK libspdk_util.so 00:03:49.036 CC lib/rdma_utils/rdma_utils.o 00:03:49.036 CC lib/json/json_parse.o 00:03:49.036 CC lib/conf/conf.o 00:03:49.036 CC lib/json/json_util.o 00:03:49.036 CC lib/vmd/vmd.o 00:03:49.036 CC lib/vmd/led.o 00:03:49.036 CC lib/idxd/idxd.o 00:03:49.036 CC lib/json/json_write.o 00:03:49.036 CC lib/rdma_provider/common.o 00:03:49.036 CC lib/env_dpdk/env.o 00:03:49.294 CC lib/env_dpdk/memory.o 00:03:49.294 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:49.294 LIB libspdk_conf.a 00:03:49.294 CC lib/env_dpdk/pci.o 00:03:49.294 SO libspdk_conf.so.6.0 00:03:49.294 CC lib/env_dpdk/init.o 00:03:49.553 LIB libspdk_rdma_utils.a 00:03:49.553 SO libspdk_rdma_utils.so.1.0 00:03:49.553 SYMLINK libspdk_conf.so 00:03:49.553 LIB libspdk_json.a 00:03:49.553 CC lib/env_dpdk/threads.o 00:03:49.553 SO libspdk_json.so.6.0 00:03:49.553 SYMLINK libspdk_rdma_utils.so 00:03:49.553 CC lib/env_dpdk/pci_ioat.o 00:03:49.553 LIB libspdk_rdma_provider.a 00:03:49.553 SYMLINK libspdk_json.so 00:03:49.553 SO libspdk_rdma_provider.so.6.0 00:03:49.553 CC lib/env_dpdk/pci_virtio.o 00:03:49.553 CC lib/env_dpdk/pci_vmd.o 00:03:49.811 SYMLINK libspdk_rdma_provider.so 00:03:49.811 CC lib/env_dpdk/pci_idxd.o 00:03:49.811 CC lib/env_dpdk/pci_event.o 00:03:49.811 CC lib/env_dpdk/sigbus_handler.o 00:03:49.811 CC lib/env_dpdk/pci_dpdk.o 00:03:49.811 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:49.811 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:49.812 CC lib/idxd/idxd_user.o 00:03:49.812 CC lib/idxd/idxd_kernel.o 00:03:50.069 LIB libspdk_vmd.a 00:03:50.069 CC lib/jsonrpc/jsonrpc_server.o 00:03:50.069 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:50.069 SO libspdk_vmd.so.6.0 00:03:50.069 CC lib/jsonrpc/jsonrpc_client.o 00:03:50.069 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:50.069 SYMLINK libspdk_vmd.so 00:03:50.069 LIB libspdk_idxd.a 00:03:50.327 SO libspdk_idxd.so.12.0 00:03:50.328 SYMLINK libspdk_idxd.so 00:03:50.328 LIB libspdk_jsonrpc.a 00:03:50.328 SO libspdk_jsonrpc.so.6.0 00:03:50.328 SYMLINK libspdk_jsonrpc.so 00:03:50.894 CC lib/rpc/rpc.o 00:03:50.895 LIB 
libspdk_env_dpdk.a 00:03:51.153 SO libspdk_env_dpdk.so.15.0 00:03:51.153 LIB libspdk_rpc.a 00:03:51.153 SO libspdk_rpc.so.6.0 00:03:51.153 SYMLINK libspdk_rpc.so 00:03:51.153 SYMLINK libspdk_env_dpdk.so 00:03:51.411 CC lib/keyring/keyring.o 00:03:51.411 CC lib/keyring/keyring_rpc.o 00:03:51.411 CC lib/notify/notify.o 00:03:51.411 CC lib/notify/notify_rpc.o 00:03:51.411 CC lib/trace/trace.o 00:03:51.411 CC lib/trace/trace_flags.o 00:03:51.411 CC lib/trace/trace_rpc.o 00:03:51.684 LIB libspdk_notify.a 00:03:51.684 SO libspdk_notify.so.6.0 00:03:51.684 LIB libspdk_keyring.a 00:03:51.684 LIB libspdk_trace.a 00:03:51.684 SYMLINK libspdk_notify.so 00:03:51.684 SO libspdk_keyring.so.1.0 00:03:51.684 SO libspdk_trace.so.10.0 00:03:51.942 SYMLINK libspdk_keyring.so 00:03:51.942 SYMLINK libspdk_trace.so 00:03:52.199 CC lib/sock/sock_rpc.o 00:03:52.200 CC lib/sock/sock.o 00:03:52.200 CC lib/thread/thread.o 00:03:52.200 CC lib/thread/iobuf.o 00:03:52.765 LIB libspdk_sock.a 00:03:52.765 SO libspdk_sock.so.10.0 00:03:52.765 SYMLINK libspdk_sock.so 00:03:53.023 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:53.023 CC lib/nvme/nvme_ctrlr.o 00:03:53.023 CC lib/nvme/nvme_fabric.o 00:03:53.023 CC lib/nvme/nvme_ns_cmd.o 00:03:53.023 CC lib/nvme/nvme_ns.o 00:03:53.023 CC lib/nvme/nvme_pcie.o 00:03:53.023 CC lib/nvme/nvme_pcie_common.o 00:03:53.023 CC lib/nvme/nvme_qpair.o 00:03:53.281 CC lib/nvme/nvme.o 00:03:53.847 CC lib/nvme/nvme_quirks.o 00:03:54.104 CC lib/nvme/nvme_transport.o 00:03:54.104 CC lib/nvme/nvme_discovery.o 00:03:54.104 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:54.104 LIB libspdk_thread.a 00:03:54.104 SO libspdk_thread.so.10.1 00:03:54.362 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:54.362 SYMLINK libspdk_thread.so 00:03:54.362 CC lib/nvme/nvme_tcp.o 00:03:54.362 CC lib/nvme/nvme_opal.o 00:03:54.362 CC lib/nvme/nvme_io_msg.o 00:03:54.362 CC lib/nvme/nvme_poll_group.o 00:03:54.619 CC lib/nvme/nvme_zns.o 00:03:54.876 CC lib/blob/blobstore.o 00:03:54.876 CC lib/accel/accel.o 00:03:54.876 CC lib/accel/accel_rpc.o 00:03:54.876 CC lib/accel/accel_sw.o 00:03:54.876 CC lib/blob/request.o 00:03:55.134 CC lib/blob/zeroes.o 00:03:55.134 CC lib/nvme/nvme_stubs.o 00:03:55.392 CC lib/nvme/nvme_auth.o 00:03:55.392 CC lib/blob/blob_bs_dev.o 00:03:55.392 CC lib/init/json_config.o 00:03:55.392 CC lib/virtio/virtio.o 00:03:55.650 CC lib/vfu_tgt/tgt_endpoint.o 00:03:55.650 CC lib/virtio/virtio_vhost_user.o 00:03:55.650 CC lib/init/subsystem.o 00:03:55.908 CC lib/init/subsystem_rpc.o 00:03:55.908 CC lib/init/rpc.o 00:03:55.908 CC lib/vfu_tgt/tgt_rpc.o 00:03:55.908 CC lib/virtio/virtio_vfio_user.o 00:03:55.908 CC lib/nvme/nvme_cuse.o 00:03:55.908 CC lib/nvme/nvme_vfio_user.o 00:03:55.908 LIB libspdk_init.a 00:03:56.165 SO libspdk_init.so.5.0 00:03:56.165 LIB libspdk_vfu_tgt.a 00:03:56.165 CC lib/virtio/virtio_pci.o 00:03:56.165 SYMLINK libspdk_init.so 00:03:56.165 CC lib/nvme/nvme_rdma.o 00:03:56.165 SO libspdk_vfu_tgt.so.3.0 00:03:56.165 LIB libspdk_accel.a 00:03:56.165 SYMLINK libspdk_vfu_tgt.so 00:03:56.165 SO libspdk_accel.so.16.0 00:03:56.424 SYMLINK libspdk_accel.so 00:03:56.424 CC lib/event/reactor.o 00:03:56.424 CC lib/event/app.o 00:03:56.424 CC lib/event/log_rpc.o 00:03:56.424 CC lib/event/app_rpc.o 00:03:56.424 LIB libspdk_virtio.a 00:03:56.424 CC lib/bdev/bdev.o 00:03:56.424 SO libspdk_virtio.so.7.0 00:03:56.682 CC lib/event/scheduler_static.o 00:03:56.682 SYMLINK libspdk_virtio.so 00:03:56.682 CC lib/bdev/bdev_rpc.o 00:03:56.682 CC lib/bdev/bdev_zone.o 00:03:56.682 CC lib/bdev/part.o 00:03:56.682 CC lib/bdev/scsi_nvme.o 
00:03:56.940 LIB libspdk_event.a 00:03:56.940 SO libspdk_event.so.14.0 00:03:57.198 SYMLINK libspdk_event.so 00:03:57.804 LIB libspdk_nvme.a 00:03:58.063 SO libspdk_nvme.so.13.1 00:03:58.321 SYMLINK libspdk_nvme.so 00:03:59.256 LIB libspdk_blob.a 00:03:59.256 SO libspdk_blob.so.11.0 00:03:59.256 SYMLINK libspdk_blob.so 00:03:59.515 CC lib/blobfs/tree.o 00:03:59.515 CC lib/blobfs/blobfs.o 00:03:59.515 CC lib/lvol/lvol.o 00:04:00.100 LIB libspdk_bdev.a 00:04:00.100 SO libspdk_bdev.so.16.0 00:04:00.359 SYMLINK libspdk_bdev.so 00:04:00.616 CC lib/nbd/nbd.o 00:04:00.616 CC lib/nbd/nbd_rpc.o 00:04:00.616 CC lib/scsi/dev.o 00:04:00.616 CC lib/ftl/ftl_core.o 00:04:00.616 CC lib/ftl/ftl_init.o 00:04:00.616 CC lib/scsi/lun.o 00:04:00.616 CC lib/nvmf/ctrlr.o 00:04:00.616 CC lib/ublk/ublk.o 00:04:00.616 LIB libspdk_blobfs.a 00:04:00.873 CC lib/ublk/ublk_rpc.o 00:04:00.873 SO libspdk_blobfs.so.10.0 00:04:00.874 LIB libspdk_lvol.a 00:04:00.874 CC lib/scsi/port.o 00:04:00.874 SO libspdk_lvol.so.10.0 00:04:00.874 CC lib/ftl/ftl_layout.o 00:04:00.874 SYMLINK libspdk_blobfs.so 00:04:00.874 CC lib/nvmf/ctrlr_discovery.o 00:04:00.874 SYMLINK libspdk_lvol.so 00:04:00.874 CC lib/nvmf/ctrlr_bdev.o 00:04:00.874 CC lib/nvmf/subsystem.o 00:04:00.874 CC lib/scsi/scsi.o 00:04:01.138 CC lib/scsi/scsi_bdev.o 00:04:01.138 CC lib/ftl/ftl_debug.o 00:04:01.138 LIB libspdk_nbd.a 00:04:01.138 SO libspdk_nbd.so.7.0 00:04:01.138 CC lib/scsi/scsi_pr.o 00:04:01.138 SYMLINK libspdk_nbd.so 00:04:01.138 CC lib/scsi/scsi_rpc.o 00:04:01.138 CC lib/scsi/task.o 00:04:01.438 CC lib/ftl/ftl_io.o 00:04:01.438 CC lib/nvmf/nvmf.o 00:04:01.438 LIB libspdk_ublk.a 00:04:01.438 SO libspdk_ublk.so.3.0 00:04:01.438 CC lib/nvmf/nvmf_rpc.o 00:04:01.438 CC lib/ftl/ftl_sb.o 00:04:01.695 SYMLINK libspdk_ublk.so 00:04:01.695 CC lib/ftl/ftl_l2p.o 00:04:01.695 CC lib/nvmf/transport.o 00:04:01.695 CC lib/ftl/ftl_l2p_flat.o 00:04:01.695 LIB libspdk_scsi.a 00:04:01.695 SO libspdk_scsi.so.9.0 00:04:01.695 CC lib/nvmf/tcp.o 00:04:01.695 CC lib/nvmf/stubs.o 00:04:01.695 CC lib/ftl/ftl_nv_cache.o 00:04:01.953 SYMLINK libspdk_scsi.so 00:04:01.953 CC lib/ftl/ftl_band.o 00:04:01.953 CC lib/nvmf/mdns_server.o 00:04:02.210 CC lib/nvmf/vfio_user.o 00:04:02.210 CC lib/nvmf/rdma.o 00:04:02.468 CC lib/nvmf/auth.o 00:04:02.468 CC lib/ftl/ftl_band_ops.o 00:04:02.725 CC lib/iscsi/conn.o 00:04:02.725 CC lib/iscsi/init_grp.o 00:04:02.725 CC lib/vhost/vhost.o 00:04:02.983 CC lib/iscsi/iscsi.o 00:04:02.983 CC lib/vhost/vhost_rpc.o 00:04:02.983 CC lib/iscsi/md5.o 00:04:02.983 CC lib/ftl/ftl_writer.o 00:04:03.240 CC lib/iscsi/param.o 00:04:03.499 CC lib/iscsi/portal_grp.o 00:04:03.499 CC lib/ftl/ftl_rq.o 00:04:03.499 CC lib/ftl/ftl_reloc.o 00:04:03.499 CC lib/vhost/vhost_scsi.o 00:04:03.499 CC lib/ftl/ftl_l2p_cache.o 00:04:03.499 CC lib/vhost/vhost_blk.o 00:04:03.756 CC lib/vhost/rte_vhost_user.o 00:04:03.756 CC lib/ftl/ftl_p2l.o 00:04:03.756 CC lib/ftl/mngt/ftl_mngt.o 00:04:04.047 CC lib/iscsi/tgt_node.o 00:04:04.047 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:04.304 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:04.304 CC lib/iscsi/iscsi_subsystem.o 00:04:04.304 CC lib/iscsi/iscsi_rpc.o 00:04:04.304 CC lib/iscsi/task.o 00:04:04.304 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:04.562 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:04.562 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:04.562 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:04.562 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:04.820 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:04.821 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:04.821 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:04:04.821 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:04.821 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:04.821 CC lib/ftl/utils/ftl_conf.o 00:04:04.821 CC lib/ftl/utils/ftl_md.o 00:04:04.821 LIB libspdk_vhost.a 00:04:04.821 LIB libspdk_iscsi.a 00:04:05.078 SO libspdk_vhost.so.8.0 00:04:05.078 SO libspdk_iscsi.so.8.0 00:04:05.078 CC lib/ftl/utils/ftl_mempool.o 00:04:05.078 CC lib/ftl/utils/ftl_bitmap.o 00:04:05.078 CC lib/ftl/utils/ftl_property.o 00:04:05.078 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:05.078 SYMLINK libspdk_vhost.so 00:04:05.078 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:05.078 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:05.337 LIB libspdk_nvmf.a 00:04:05.337 SYMLINK libspdk_iscsi.so 00:04:05.337 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:05.337 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:05.337 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:05.337 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:05.337 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:05.337 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:05.337 SO libspdk_nvmf.so.19.0 00:04:05.337 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:05.337 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:05.595 CC lib/ftl/base/ftl_base_dev.o 00:04:05.595 CC lib/ftl/base/ftl_base_bdev.o 00:04:05.595 CC lib/ftl/ftl_trace.o 00:04:05.853 SYMLINK libspdk_nvmf.so 00:04:05.853 LIB libspdk_ftl.a 00:04:06.111 SO libspdk_ftl.so.9.0 00:04:06.700 SYMLINK libspdk_ftl.so 00:04:06.959 CC module/vfu_device/vfu_virtio.o 00:04:06.959 CC module/env_dpdk/env_dpdk_rpc.o 00:04:06.959 CC module/sock/posix/posix.o 00:04:06.959 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:06.959 CC module/blob/bdev/blob_bdev.o 00:04:06.959 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:06.959 CC module/sock/uring/uring.o 00:04:06.959 CC module/keyring/file/keyring.o 00:04:06.959 CC module/scheduler/gscheduler/gscheduler.o 00:04:06.959 CC module/accel/error/accel_error.o 00:04:06.959 LIB libspdk_env_dpdk_rpc.a 00:04:07.217 SO libspdk_env_dpdk_rpc.so.6.0 00:04:07.217 CC module/keyring/file/keyring_rpc.o 00:04:07.217 LIB libspdk_scheduler_dpdk_governor.a 00:04:07.217 SYMLINK libspdk_env_dpdk_rpc.so 00:04:07.217 CC module/accel/error/accel_error_rpc.o 00:04:07.217 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:07.217 LIB libspdk_scheduler_gscheduler.a 00:04:07.217 LIB libspdk_scheduler_dynamic.a 00:04:07.217 SO libspdk_scheduler_gscheduler.so.4.0 00:04:07.217 SO libspdk_scheduler_dynamic.so.4.0 00:04:07.217 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:07.217 CC module/vfu_device/vfu_virtio_blk.o 00:04:07.217 LIB libspdk_keyring_file.a 00:04:07.217 LIB libspdk_blob_bdev.a 00:04:07.217 LIB libspdk_accel_error.a 00:04:07.217 SYMLINK libspdk_scheduler_gscheduler.so 00:04:07.217 SYMLINK libspdk_scheduler_dynamic.so 00:04:07.476 SO libspdk_blob_bdev.so.11.0 00:04:07.476 SO libspdk_keyring_file.so.1.0 00:04:07.476 SO libspdk_accel_error.so.2.0 00:04:07.476 SYMLINK libspdk_keyring_file.so 00:04:07.476 SYMLINK libspdk_accel_error.so 00:04:07.476 SYMLINK libspdk_blob_bdev.so 00:04:07.476 CC module/vfu_device/vfu_virtio_scsi.o 00:04:07.476 CC module/vfu_device/vfu_virtio_rpc.o 00:04:07.476 CC module/keyring/linux/keyring.o 00:04:07.476 CC module/accel/ioat/accel_ioat.o 00:04:07.476 CC module/accel/dsa/accel_dsa.o 00:04:07.733 CC module/accel/dsa/accel_dsa_rpc.o 00:04:07.733 CC module/accel/iaa/accel_iaa.o 00:04:07.733 CC module/keyring/linux/keyring_rpc.o 00:04:07.733 CC module/accel/ioat/accel_ioat_rpc.o 00:04:07.733 CC module/accel/iaa/accel_iaa_rpc.o 00:04:07.733 LIB libspdk_keyring_linux.a 00:04:07.990 
LIB libspdk_accel_ioat.a 00:04:07.990 SO libspdk_keyring_linux.so.1.0 00:04:07.990 LIB libspdk_accel_dsa.a 00:04:07.990 SO libspdk_accel_ioat.so.6.0 00:04:07.990 LIB libspdk_sock_posix.a 00:04:07.990 LIB libspdk_vfu_device.a 00:04:07.990 SO libspdk_accel_dsa.so.5.0 00:04:07.990 LIB libspdk_accel_iaa.a 00:04:07.990 SO libspdk_sock_posix.so.6.0 00:04:07.990 SYMLINK libspdk_keyring_linux.so 00:04:07.990 SO libspdk_vfu_device.so.3.0 00:04:07.990 SO libspdk_accel_iaa.so.3.0 00:04:07.990 SYMLINK libspdk_accel_ioat.so 00:04:07.990 LIB libspdk_sock_uring.a 00:04:07.990 SYMLINK libspdk_accel_dsa.so 00:04:07.990 SO libspdk_sock_uring.so.5.0 00:04:07.990 CC module/bdev/delay/vbdev_delay.o 00:04:07.990 SYMLINK libspdk_sock_posix.so 00:04:07.990 SYMLINK libspdk_accel_iaa.so 00:04:07.990 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:07.990 SYMLINK libspdk_vfu_device.so 00:04:07.990 CC module/bdev/error/vbdev_error.o 00:04:07.990 CC module/bdev/error/vbdev_error_rpc.o 00:04:07.990 CC module/blobfs/bdev/blobfs_bdev.o 00:04:08.247 SYMLINK libspdk_sock_uring.so 00:04:08.247 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:08.247 CC module/bdev/gpt/gpt.o 00:04:08.247 CC module/bdev/lvol/vbdev_lvol.o 00:04:08.247 CC module/bdev/malloc/bdev_malloc.o 00:04:08.247 CC module/bdev/null/bdev_null.o 00:04:08.247 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:08.247 CC module/bdev/null/bdev_null_rpc.o 00:04:08.247 LIB libspdk_blobfs_bdev.a 00:04:08.247 SO libspdk_blobfs_bdev.so.6.0 00:04:08.504 LIB libspdk_bdev_error.a 00:04:08.504 CC module/bdev/gpt/vbdev_gpt.o 00:04:08.504 SO libspdk_bdev_error.so.6.0 00:04:08.504 CC module/bdev/nvme/bdev_nvme.o 00:04:08.504 SYMLINK libspdk_blobfs_bdev.so 00:04:08.504 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:08.504 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:08.504 SYMLINK libspdk_bdev_error.so 00:04:08.504 CC module/bdev/nvme/nvme_rpc.o 00:04:08.504 LIB libspdk_bdev_delay.a 00:04:08.504 SO libspdk_bdev_delay.so.6.0 00:04:08.504 LIB libspdk_bdev_null.a 00:04:08.761 SO libspdk_bdev_null.so.6.0 00:04:08.761 SYMLINK libspdk_bdev_delay.so 00:04:08.761 LIB libspdk_bdev_malloc.a 00:04:08.761 CC module/bdev/passthru/vbdev_passthru.o 00:04:08.761 SO libspdk_bdev_malloc.so.6.0 00:04:08.761 SYMLINK libspdk_bdev_null.so 00:04:08.761 LIB libspdk_bdev_gpt.a 00:04:08.761 SO libspdk_bdev_gpt.so.6.0 00:04:08.761 SYMLINK libspdk_bdev_malloc.so 00:04:08.761 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:08.761 CC module/bdev/raid/bdev_raid.o 00:04:08.761 CC module/bdev/nvme/bdev_mdns_client.o 00:04:08.761 SYMLINK libspdk_bdev_gpt.so 00:04:09.018 CC module/bdev/split/vbdev_split.o 00:04:09.018 LIB libspdk_bdev_lvol.a 00:04:09.018 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:09.018 SO libspdk_bdev_lvol.so.6.0 00:04:09.018 CC module/bdev/nvme/vbdev_opal.o 00:04:09.018 CC module/bdev/uring/bdev_uring.o 00:04:09.018 LIB libspdk_bdev_passthru.a 00:04:09.018 SYMLINK libspdk_bdev_lvol.so 00:04:09.018 CC module/bdev/uring/bdev_uring_rpc.o 00:04:09.018 SO libspdk_bdev_passthru.so.6.0 00:04:09.275 CC module/bdev/split/vbdev_split_rpc.o 00:04:09.275 SYMLINK libspdk_bdev_passthru.so 00:04:09.275 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:09.275 CC module/bdev/aio/bdev_aio.o 00:04:09.275 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:09.275 CC module/bdev/raid/bdev_raid_rpc.o 00:04:09.275 LIB libspdk_bdev_split.a 00:04:09.532 CC module/bdev/ftl/bdev_ftl.o 00:04:09.532 SO libspdk_bdev_split.so.6.0 00:04:09.532 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:09.532 CC module/bdev/aio/bdev_aio_rpc.o 00:04:09.532 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:09.532 LIB libspdk_bdev_uring.a 00:04:09.532 SYMLINK libspdk_bdev_split.so 00:04:09.532 SO libspdk_bdev_uring.so.6.0 00:04:09.532 CC module/bdev/raid/bdev_raid_sb.o 00:04:09.532 CC module/bdev/raid/raid0.o 00:04:09.798 LIB libspdk_bdev_aio.a 00:04:09.798 SYMLINK libspdk_bdev_uring.so 00:04:09.798 LIB libspdk_bdev_zone_block.a 00:04:09.798 CC module/bdev/raid/raid1.o 00:04:09.798 SO libspdk_bdev_aio.so.6.0 00:04:09.798 SO libspdk_bdev_zone_block.so.6.0 00:04:09.798 CC module/bdev/iscsi/bdev_iscsi.o 00:04:09.798 LIB libspdk_bdev_ftl.a 00:04:09.798 SYMLINK libspdk_bdev_aio.so 00:04:09.798 SYMLINK libspdk_bdev_zone_block.so 00:04:09.798 CC module/bdev/raid/concat.o 00:04:09.798 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:09.798 SO libspdk_bdev_ftl.so.6.0 00:04:09.798 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:10.066 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:10.066 SYMLINK libspdk_bdev_ftl.so 00:04:10.066 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:10.066 LIB libspdk_bdev_raid.a 00:04:10.324 LIB libspdk_bdev_iscsi.a 00:04:10.324 SO libspdk_bdev_raid.so.6.0 00:04:10.324 SO libspdk_bdev_iscsi.so.6.0 00:04:10.324 SYMLINK libspdk_bdev_iscsi.so 00:04:10.324 SYMLINK libspdk_bdev_raid.so 00:04:10.581 LIB libspdk_bdev_virtio.a 00:04:10.581 SO libspdk_bdev_virtio.so.6.0 00:04:10.839 SYMLINK libspdk_bdev_virtio.so 00:04:11.406 LIB libspdk_bdev_nvme.a 00:04:11.406 SO libspdk_bdev_nvme.so.7.0 00:04:11.664 SYMLINK libspdk_bdev_nvme.so 00:04:12.229 CC module/event/subsystems/sock/sock.o 00:04:12.229 CC module/event/subsystems/scheduler/scheduler.o 00:04:12.229 CC module/event/subsystems/keyring/keyring.o 00:04:12.229 CC module/event/subsystems/iobuf/iobuf.o 00:04:12.229 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:12.229 CC module/event/subsystems/vmd/vmd.o 00:04:12.229 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:12.229 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:12.229 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:12.229 LIB libspdk_event_keyring.a 00:04:12.229 LIB libspdk_event_sock.a 00:04:12.229 LIB libspdk_event_vhost_blk.a 00:04:12.229 LIB libspdk_event_scheduler.a 00:04:12.229 LIB libspdk_event_vfu_tgt.a 00:04:12.229 LIB libspdk_event_vmd.a 00:04:12.229 SO libspdk_event_keyring.so.1.0 00:04:12.488 SO libspdk_event_sock.so.5.0 00:04:12.488 SO libspdk_event_scheduler.so.4.0 00:04:12.488 SO libspdk_event_vhost_blk.so.3.0 00:04:12.488 SO libspdk_event_vfu_tgt.so.3.0 00:04:12.488 SYMLINK libspdk_event_keyring.so 00:04:12.488 SO libspdk_event_vmd.so.6.0 00:04:12.488 SYMLINK libspdk_event_scheduler.so 00:04:12.488 SYMLINK libspdk_event_vhost_blk.so 00:04:12.488 LIB libspdk_event_iobuf.a 00:04:12.488 SYMLINK libspdk_event_sock.so 00:04:12.488 SYMLINK libspdk_event_vfu_tgt.so 00:04:12.488 SO libspdk_event_iobuf.so.3.0 00:04:12.488 SYMLINK libspdk_event_vmd.so 00:04:12.488 SYMLINK libspdk_event_iobuf.so 00:04:12.747 CC module/event/subsystems/accel/accel.o 00:04:13.004 LIB libspdk_event_accel.a 00:04:13.004 SO libspdk_event_accel.so.6.0 00:04:13.004 SYMLINK libspdk_event_accel.so 00:04:13.262 CC module/event/subsystems/bdev/bdev.o 00:04:13.520 LIB libspdk_event_bdev.a 00:04:13.520 SO libspdk_event_bdev.so.6.0 00:04:13.520 SYMLINK libspdk_event_bdev.so 00:04:13.778 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:13.778 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:13.778 CC module/event/subsystems/ublk/ublk.o 00:04:13.778 CC module/event/subsystems/scsi/scsi.o 00:04:13.778 CC module/event/subsystems/nbd/nbd.o 00:04:14.035 
LIB libspdk_event_ublk.a 00:04:14.035 LIB libspdk_event_scsi.a 00:04:14.035 SO libspdk_event_ublk.so.3.0 00:04:14.035 LIB libspdk_event_nbd.a 00:04:14.035 SO libspdk_event_scsi.so.6.0 00:04:14.035 SO libspdk_event_nbd.so.6.0 00:04:14.035 SYMLINK libspdk_event_ublk.so 00:04:14.035 SYMLINK libspdk_event_scsi.so 00:04:14.293 LIB libspdk_event_nvmf.a 00:04:14.293 SYMLINK libspdk_event_nbd.so 00:04:14.293 SO libspdk_event_nvmf.so.6.0 00:04:14.293 SYMLINK libspdk_event_nvmf.so 00:04:14.293 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:14.293 CC module/event/subsystems/iscsi/iscsi.o 00:04:14.551 LIB libspdk_event_vhost_scsi.a 00:04:14.551 SO libspdk_event_vhost_scsi.so.3.0 00:04:14.551 LIB libspdk_event_iscsi.a 00:04:14.551 SO libspdk_event_iscsi.so.6.0 00:04:14.551 SYMLINK libspdk_event_vhost_scsi.so 00:04:14.808 SYMLINK libspdk_event_iscsi.so 00:04:14.808 SO libspdk.so.6.0 00:04:14.808 SYMLINK libspdk.so 00:04:15.067 TEST_HEADER include/spdk/accel.h 00:04:15.067 CC app/trace_record/trace_record.o 00:04:15.067 CC test/rpc_client/rpc_client_test.o 00:04:15.067 TEST_HEADER include/spdk/accel_module.h 00:04:15.067 TEST_HEADER include/spdk/assert.h 00:04:15.067 TEST_HEADER include/spdk/barrier.h 00:04:15.067 CXX app/trace/trace.o 00:04:15.067 TEST_HEADER include/spdk/base64.h 00:04:15.067 TEST_HEADER include/spdk/bdev.h 00:04:15.067 TEST_HEADER include/spdk/bdev_module.h 00:04:15.067 TEST_HEADER include/spdk/bdev_zone.h 00:04:15.067 TEST_HEADER include/spdk/bit_array.h 00:04:15.067 TEST_HEADER include/spdk/bit_pool.h 00:04:15.067 TEST_HEADER include/spdk/blob_bdev.h 00:04:15.067 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:15.067 TEST_HEADER include/spdk/blobfs.h 00:04:15.067 TEST_HEADER include/spdk/blob.h 00:04:15.067 TEST_HEADER include/spdk/conf.h 00:04:15.067 TEST_HEADER include/spdk/config.h 00:04:15.067 TEST_HEADER include/spdk/cpuset.h 00:04:15.067 TEST_HEADER include/spdk/crc16.h 00:04:15.067 TEST_HEADER include/spdk/crc32.h 00:04:15.067 TEST_HEADER include/spdk/crc64.h 00:04:15.067 TEST_HEADER include/spdk/dif.h 00:04:15.067 TEST_HEADER include/spdk/dma.h 00:04:15.067 TEST_HEADER include/spdk/endian.h 00:04:15.067 TEST_HEADER include/spdk/env_dpdk.h 00:04:15.067 TEST_HEADER include/spdk/env.h 00:04:15.067 CC app/nvmf_tgt/nvmf_main.o 00:04:15.067 TEST_HEADER include/spdk/event.h 00:04:15.067 TEST_HEADER include/spdk/fd_group.h 00:04:15.067 TEST_HEADER include/spdk/fd.h 00:04:15.067 TEST_HEADER include/spdk/file.h 00:04:15.067 TEST_HEADER include/spdk/ftl.h 00:04:15.067 TEST_HEADER include/spdk/gpt_spec.h 00:04:15.067 TEST_HEADER include/spdk/hexlify.h 00:04:15.067 TEST_HEADER include/spdk/histogram_data.h 00:04:15.067 TEST_HEADER include/spdk/idxd.h 00:04:15.067 TEST_HEADER include/spdk/idxd_spec.h 00:04:15.067 TEST_HEADER include/spdk/init.h 00:04:15.067 TEST_HEADER include/spdk/ioat.h 00:04:15.067 TEST_HEADER include/spdk/ioat_spec.h 00:04:15.067 TEST_HEADER include/spdk/iscsi_spec.h 00:04:15.067 TEST_HEADER include/spdk/json.h 00:04:15.067 TEST_HEADER include/spdk/jsonrpc.h 00:04:15.067 CC test/thread/poller_perf/poller_perf.o 00:04:15.067 TEST_HEADER include/spdk/keyring.h 00:04:15.067 TEST_HEADER include/spdk/keyring_module.h 00:04:15.067 CC examples/util/zipf/zipf.o 00:04:15.067 TEST_HEADER include/spdk/likely.h 00:04:15.067 TEST_HEADER include/spdk/log.h 00:04:15.067 TEST_HEADER include/spdk/lvol.h 00:04:15.325 TEST_HEADER include/spdk/memory.h 00:04:15.325 TEST_HEADER include/spdk/mmio.h 00:04:15.325 TEST_HEADER include/spdk/nbd.h 00:04:15.325 TEST_HEADER 
include/spdk/net.h 00:04:15.325 TEST_HEADER include/spdk/notify.h 00:04:15.325 TEST_HEADER include/spdk/nvme.h 00:04:15.325 TEST_HEADER include/spdk/nvme_intel.h 00:04:15.325 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:15.325 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:15.325 TEST_HEADER include/spdk/nvme_spec.h 00:04:15.325 TEST_HEADER include/spdk/nvme_zns.h 00:04:15.325 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:15.325 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:15.325 TEST_HEADER include/spdk/nvmf.h 00:04:15.325 TEST_HEADER include/spdk/nvmf_spec.h 00:04:15.325 TEST_HEADER include/spdk/nvmf_transport.h 00:04:15.325 TEST_HEADER include/spdk/opal.h 00:04:15.325 TEST_HEADER include/spdk/opal_spec.h 00:04:15.325 TEST_HEADER include/spdk/pci_ids.h 00:04:15.325 TEST_HEADER include/spdk/pipe.h 00:04:15.325 TEST_HEADER include/spdk/queue.h 00:04:15.325 TEST_HEADER include/spdk/reduce.h 00:04:15.325 CC test/dma/test_dma/test_dma.o 00:04:15.325 TEST_HEADER include/spdk/rpc.h 00:04:15.325 CC test/app/bdev_svc/bdev_svc.o 00:04:15.325 TEST_HEADER include/spdk/scheduler.h 00:04:15.325 TEST_HEADER include/spdk/scsi.h 00:04:15.325 TEST_HEADER include/spdk/scsi_spec.h 00:04:15.325 TEST_HEADER include/spdk/sock.h 00:04:15.325 TEST_HEADER include/spdk/stdinc.h 00:04:15.325 TEST_HEADER include/spdk/string.h 00:04:15.325 TEST_HEADER include/spdk/thread.h 00:04:15.325 TEST_HEADER include/spdk/trace.h 00:04:15.325 TEST_HEADER include/spdk/trace_parser.h 00:04:15.325 TEST_HEADER include/spdk/tree.h 00:04:15.325 TEST_HEADER include/spdk/ublk.h 00:04:15.325 CC test/env/mem_callbacks/mem_callbacks.o 00:04:15.325 TEST_HEADER include/spdk/util.h 00:04:15.325 TEST_HEADER include/spdk/uuid.h 00:04:15.325 TEST_HEADER include/spdk/version.h 00:04:15.325 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:15.325 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:15.326 TEST_HEADER include/spdk/vhost.h 00:04:15.326 TEST_HEADER include/spdk/vmd.h 00:04:15.326 TEST_HEADER include/spdk/xor.h 00:04:15.326 TEST_HEADER include/spdk/zipf.h 00:04:15.326 CXX test/cpp_headers/accel.o 00:04:15.326 LINK rpc_client_test 00:04:15.326 LINK poller_perf 00:04:15.326 LINK nvmf_tgt 00:04:15.326 LINK zipf 00:04:15.583 LINK spdk_trace_record 00:04:15.583 CXX test/cpp_headers/accel_module.o 00:04:15.583 CXX test/cpp_headers/assert.o 00:04:15.583 LINK bdev_svc 00:04:15.583 LINK spdk_trace 00:04:15.949 LINK test_dma 00:04:15.949 CXX test/cpp_headers/barrier.o 00:04:15.949 CC app/iscsi_tgt/iscsi_tgt.o 00:04:15.949 CC examples/ioat/perf/perf.o 00:04:15.949 CC test/event/event_perf/event_perf.o 00:04:15.949 CC app/spdk_tgt/spdk_tgt.o 00:04:15.949 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:15.949 CXX test/cpp_headers/base64.o 00:04:15.949 CC test/event/reactor/reactor.o 00:04:15.950 LINK mem_callbacks 00:04:15.950 CC test/event/reactor_perf/reactor_perf.o 00:04:15.950 LINK iscsi_tgt 00:04:15.950 LINK event_perf 00:04:15.950 CC test/app/histogram_perf/histogram_perf.o 00:04:16.207 LINK reactor 00:04:16.207 CXX test/cpp_headers/bdev.o 00:04:16.207 LINK spdk_tgt 00:04:16.207 LINK ioat_perf 00:04:16.207 LINK reactor_perf 00:04:16.207 CC test/env/vtophys/vtophys.o 00:04:16.207 LINK histogram_perf 00:04:16.207 CXX test/cpp_headers/bdev_module.o 00:04:16.465 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:16.465 CC examples/ioat/verify/verify.o 00:04:16.465 LINK vtophys 00:04:16.465 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:16.465 CC test/accel/dif/dif.o 00:04:16.465 LINK nvme_fuzz 00:04:16.465 CC test/event/app_repeat/app_repeat.o 00:04:16.465 
CC app/spdk_lspci/spdk_lspci.o 00:04:16.465 CC app/spdk_nvme_perf/perf.o 00:04:16.465 CXX test/cpp_headers/bdev_zone.o 00:04:16.465 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:16.724 LINK verify 00:04:16.724 LINK spdk_lspci 00:04:16.724 LINK app_repeat 00:04:16.724 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:16.724 CC test/app/jsoncat/jsoncat.o 00:04:16.724 CXX test/cpp_headers/bit_array.o 00:04:16.981 LINK env_dpdk_post_init 00:04:16.981 LINK jsoncat 00:04:16.981 CXX test/cpp_headers/bit_pool.o 00:04:16.981 CC test/event/scheduler/scheduler.o 00:04:16.981 LINK dif 00:04:16.981 CC examples/vmd/lsvmd/lsvmd.o 00:04:16.981 LINK vhost_fuzz 00:04:16.981 CC examples/idxd/perf/perf.o 00:04:17.238 CXX test/cpp_headers/blob_bdev.o 00:04:17.238 CC test/env/memory/memory_ut.o 00:04:17.238 CC test/env/pci/pci_ut.o 00:04:17.238 LINK lsvmd 00:04:17.238 CXX test/cpp_headers/blobfs_bdev.o 00:04:17.238 LINK scheduler 00:04:17.496 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:17.496 CXX test/cpp_headers/blobfs.o 00:04:17.496 LINK idxd_perf 00:04:17.496 CC examples/vmd/led/led.o 00:04:17.496 CC test/blobfs/mkfs/mkfs.o 00:04:17.754 LINK interrupt_tgt 00:04:17.754 CXX test/cpp_headers/blob.o 00:04:17.754 LINK pci_ut 00:04:17.754 LINK spdk_nvme_perf 00:04:17.754 LINK led 00:04:17.754 LINK mkfs 00:04:17.754 CXX test/cpp_headers/conf.o 00:04:17.754 CXX test/cpp_headers/config.o 00:04:18.012 CXX test/cpp_headers/cpuset.o 00:04:18.012 CC examples/thread/thread/thread_ex.o 00:04:18.012 CC test/lvol/esnap/esnap.o 00:04:18.012 CC test/app/stub/stub.o 00:04:18.012 CC app/spdk_nvme_identify/identify.o 00:04:18.012 CXX test/cpp_headers/crc16.o 00:04:18.012 CXX test/cpp_headers/crc32.o 00:04:18.012 CC app/spdk_nvme_discover/discovery_aer.o 00:04:18.270 LINK thread 00:04:18.270 LINK stub 00:04:18.270 CXX test/cpp_headers/crc64.o 00:04:18.270 CC examples/sock/hello_world/hello_sock.o 00:04:18.270 LINK spdk_nvme_discover 00:04:18.270 CC app/spdk_top/spdk_top.o 00:04:18.528 CXX test/cpp_headers/dif.o 00:04:18.528 LINK memory_ut 00:04:18.528 LINK hello_sock 00:04:18.528 CXX test/cpp_headers/dma.o 00:04:18.528 CC test/nvme/aer/aer.o 00:04:18.786 LINK iscsi_fuzz 00:04:18.786 CC test/bdev/bdevio/bdevio.o 00:04:18.786 CC examples/accel/perf/accel_perf.o 00:04:18.786 CXX test/cpp_headers/endian.o 00:04:18.786 CC test/nvme/reset/reset.o 00:04:19.044 CC examples/blob/hello_world/hello_blob.o 00:04:19.044 LINK aer 00:04:19.044 CXX test/cpp_headers/env_dpdk.o 00:04:19.044 CC examples/blob/cli/blobcli.o 00:04:19.044 LINK reset 00:04:19.304 LINK bdevio 00:04:19.304 CXX test/cpp_headers/env.o 00:04:19.304 LINK spdk_nvme_identify 00:04:19.304 LINK hello_blob 00:04:19.304 CC test/nvme/sgl/sgl.o 00:04:19.305 LINK accel_perf 00:04:19.305 CXX test/cpp_headers/event.o 00:04:19.562 CC test/nvme/e2edp/nvme_dp.o 00:04:19.562 LINK spdk_top 00:04:19.562 CC app/spdk_dd/spdk_dd.o 00:04:19.562 CC app/vhost/vhost.o 00:04:19.562 CXX test/cpp_headers/fd_group.o 00:04:19.562 LINK sgl 00:04:19.821 CC app/fio/nvme/fio_plugin.o 00:04:19.821 CC examples/nvme/hello_world/hello_world.o 00:04:19.821 LINK blobcli 00:04:19.821 LINK nvme_dp 00:04:19.821 CXX test/cpp_headers/fd.o 00:04:19.821 CC test/nvme/overhead/overhead.o 00:04:19.821 LINK vhost 00:04:19.821 CC examples/nvme/reconnect/reconnect.o 00:04:20.079 LINK hello_world 00:04:20.079 CXX test/cpp_headers/file.o 00:04:20.079 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:20.079 CC examples/nvme/arbitration/arbitration.o 00:04:20.079 LINK spdk_dd 00:04:20.079 CXX test/cpp_headers/ftl.o 
00:04:20.079 LINK overhead 00:04:20.337 CC examples/bdev/hello_world/hello_bdev.o 00:04:20.337 CC app/fio/bdev/fio_plugin.o 00:04:20.337 LINK reconnect 00:04:20.337 CXX test/cpp_headers/gpt_spec.o 00:04:20.595 LINK spdk_nvme 00:04:20.595 CC test/nvme/err_injection/err_injection.o 00:04:20.595 LINK arbitration 00:04:20.595 CC examples/bdev/bdevperf/bdevperf.o 00:04:20.595 CXX test/cpp_headers/hexlify.o 00:04:20.595 LINK hello_bdev 00:04:20.595 CC examples/nvme/hotplug/hotplug.o 00:04:20.595 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:20.852 LINK err_injection 00:04:20.852 LINK nvme_manage 00:04:20.852 CC examples/nvme/abort/abort.o 00:04:20.852 CXX test/cpp_headers/histogram_data.o 00:04:20.852 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:20.852 CXX test/cpp_headers/idxd.o 00:04:20.852 LINK hotplug 00:04:20.852 LINK cmb_copy 00:04:21.109 LINK spdk_bdev 00:04:21.109 CC test/nvme/startup/startup.o 00:04:21.109 LINK pmr_persistence 00:04:21.109 CXX test/cpp_headers/idxd_spec.o 00:04:21.109 CXX test/cpp_headers/init.o 00:04:21.366 LINK startup 00:04:21.366 CC test/nvme/reserve/reserve.o 00:04:21.366 CC test/nvme/simple_copy/simple_copy.o 00:04:21.366 CC test/nvme/connect_stress/connect_stress.o 00:04:21.366 CXX test/cpp_headers/ioat.o 00:04:21.366 LINK abort 00:04:21.366 CXX test/cpp_headers/ioat_spec.o 00:04:21.366 LINK connect_stress 00:04:21.623 CC test/nvme/boot_partition/boot_partition.o 00:04:21.623 CC test/nvme/compliance/nvme_compliance.o 00:04:21.623 LINK simple_copy 00:04:21.623 CXX test/cpp_headers/iscsi_spec.o 00:04:21.623 LINK reserve 00:04:21.623 LINK bdevperf 00:04:21.623 CC test/nvme/fused_ordering/fused_ordering.o 00:04:21.623 LINK boot_partition 00:04:21.623 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:21.881 CXX test/cpp_headers/json.o 00:04:21.881 CC test/nvme/fdp/fdp.o 00:04:21.881 CXX test/cpp_headers/jsonrpc.o 00:04:21.881 CC test/nvme/cuse/cuse.o 00:04:21.881 CXX test/cpp_headers/keyring.o 00:04:21.881 LINK fused_ordering 00:04:21.881 CXX test/cpp_headers/keyring_module.o 00:04:21.881 LINK doorbell_aers 00:04:21.881 LINK nvme_compliance 00:04:22.139 CXX test/cpp_headers/likely.o 00:04:22.139 CXX test/cpp_headers/log.o 00:04:22.139 CXX test/cpp_headers/lvol.o 00:04:22.139 CXX test/cpp_headers/memory.o 00:04:22.139 CXX test/cpp_headers/mmio.o 00:04:22.139 CC examples/nvmf/nvmf/nvmf.o 00:04:22.139 CXX test/cpp_headers/nbd.o 00:04:22.139 CXX test/cpp_headers/net.o 00:04:22.139 CXX test/cpp_headers/notify.o 00:04:22.139 LINK fdp 00:04:22.397 CXX test/cpp_headers/nvme.o 00:04:22.397 CXX test/cpp_headers/nvme_intel.o 00:04:22.397 CXX test/cpp_headers/nvme_ocssd.o 00:04:22.397 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:22.397 CXX test/cpp_headers/nvme_spec.o 00:04:22.397 CXX test/cpp_headers/nvme_zns.o 00:04:22.397 CXX test/cpp_headers/nvmf_cmd.o 00:04:22.397 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:22.397 CXX test/cpp_headers/nvmf.o 00:04:22.662 LINK nvmf 00:04:22.662 CXX test/cpp_headers/nvmf_spec.o 00:04:22.662 CXX test/cpp_headers/nvmf_transport.o 00:04:22.662 CXX test/cpp_headers/opal.o 00:04:22.662 CXX test/cpp_headers/opal_spec.o 00:04:22.662 CXX test/cpp_headers/pci_ids.o 00:04:22.662 CXX test/cpp_headers/pipe.o 00:04:22.662 CXX test/cpp_headers/queue.o 00:04:22.662 CXX test/cpp_headers/reduce.o 00:04:22.662 CXX test/cpp_headers/rpc.o 00:04:22.936 CXX test/cpp_headers/scheduler.o 00:04:22.936 CXX test/cpp_headers/scsi.o 00:04:22.936 CXX test/cpp_headers/scsi_spec.o 00:04:22.936 CXX test/cpp_headers/sock.o 00:04:22.936 CXX test/cpp_headers/stdinc.o 
00:04:22.936 CXX test/cpp_headers/string.o 00:04:22.936 CXX test/cpp_headers/thread.o 00:04:22.936 CXX test/cpp_headers/trace.o 00:04:22.936 CXX test/cpp_headers/trace_parser.o 00:04:22.936 CXX test/cpp_headers/tree.o 00:04:22.936 CXX test/cpp_headers/ublk.o 00:04:22.936 CXX test/cpp_headers/util.o 00:04:22.936 CXX test/cpp_headers/uuid.o 00:04:22.936 CXX test/cpp_headers/version.o 00:04:23.193 CXX test/cpp_headers/vfio_user_pci.o 00:04:23.193 CXX test/cpp_headers/vfio_user_spec.o 00:04:23.193 CXX test/cpp_headers/vhost.o 00:04:23.193 CXX test/cpp_headers/vmd.o 00:04:23.193 CXX test/cpp_headers/xor.o 00:04:23.193 CXX test/cpp_headers/zipf.o 00:04:23.450 LINK cuse 00:04:25.352 LINK esnap 00:04:25.610 00:04:25.610 real 1m16.641s 00:04:25.610 user 7m32.124s 00:04:25.610 sys 1m41.010s 00:04:25.610 18:11:37 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:25.610 ************************************ 00:04:25.610 END TEST make 00:04:25.610 ************************************ 00:04:25.610 18:11:37 make -- common/autotest_common.sh@10 -- $ set +x 00:04:25.610 18:11:37 -- common/autotest_common.sh@1142 -- $ return 0 00:04:25.610 18:11:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:25.610 18:11:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:25.610 18:11:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:25.610 18:11:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.610 18:11:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:25.610 18:11:37 -- pm/common@44 -- $ pid=5204 00:04:25.610 18:11:37 -- pm/common@50 -- $ kill -TERM 5204 00:04:25.610 18:11:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.610 18:11:37 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:25.610 18:11:37 -- pm/common@44 -- $ pid=5206 00:04:25.610 18:11:37 -- pm/common@50 -- $ kill -TERM 5206 00:04:25.912 18:11:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:25.912 18:11:37 -- nvmf/common.sh@7 -- # uname -s 00:04:25.912 18:11:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:25.912 18:11:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:25.912 18:11:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:25.912 18:11:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:25.912 18:11:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:25.912 18:11:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:25.912 18:11:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:25.912 18:11:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:25.912 18:11:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:25.912 18:11:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:25.912 18:11:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:04:25.912 18:11:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:04:25.912 18:11:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:25.912 18:11:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:25.912 18:11:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:25.912 18:11:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:25.912 18:11:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:25.912 18:11:37 -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:25.912 18:11:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.912 18:11:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.913 18:11:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.913 18:11:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.913 18:11:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.913 18:11:37 -- paths/export.sh@5 -- # export PATH 00:04:25.913 18:11:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.913 18:11:37 -- nvmf/common.sh@47 -- # : 0 00:04:25.913 18:11:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:25.913 18:11:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:25.913 18:11:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:25.913 18:11:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:25.913 18:11:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:25.913 18:11:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:25.913 18:11:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:25.913 18:11:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:25.913 18:11:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:25.913 18:11:37 -- spdk/autotest.sh@32 -- # uname -s 00:04:25.913 18:11:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:25.913 18:11:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:25.913 18:11:37 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:25.913 18:11:37 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:25.913 18:11:37 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:25.913 18:11:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:25.913 18:11:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:25.913 18:11:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:25.913 18:11:37 -- spdk/autotest.sh@48 -- # udevadm_pid=53490 00:04:25.913 18:11:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:25.913 18:11:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:25.913 18:11:37 -- pm/common@17 -- # local monitor 00:04:25.913 18:11:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.913 18:11:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.913 18:11:37 -- pm/common@25 -- # sleep 1 
00:04:25.913 18:11:37 -- pm/common@21 -- # date +%s 00:04:25.913 18:11:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721671897 00:04:25.913 18:11:37 -- pm/common@21 -- # date +%s 00:04:25.913 18:11:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721671897 00:04:25.913 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721671897_collect-vmstat.pm.log 00:04:25.913 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721671897_collect-cpu-load.pm.log 00:04:26.870 18:11:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:26.870 18:11:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:26.870 18:11:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.870 18:11:38 -- common/autotest_common.sh@10 -- # set +x 00:04:26.871 18:11:38 -- spdk/autotest.sh@59 -- # create_test_list 00:04:26.871 18:11:38 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:26.871 18:11:38 -- common/autotest_common.sh@10 -- # set +x 00:04:26.871 18:11:38 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:26.871 18:11:38 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:26.871 18:11:38 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:26.871 18:11:38 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:26.871 18:11:38 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:26.871 18:11:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:26.871 18:11:38 -- common/autotest_common.sh@1455 -- # uname 00:04:26.871 18:11:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:26.871 18:11:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:26.871 18:11:38 -- common/autotest_common.sh@1475 -- # uname 00:04:26.871 18:11:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:26.871 18:11:38 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:26.871 18:11:38 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:26.871 18:11:38 -- spdk/autotest.sh@72 -- # hash lcov 00:04:26.871 18:11:38 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:26.871 18:11:38 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:26.871 --rc lcov_branch_coverage=1 00:04:26.871 --rc lcov_function_coverage=1 00:04:26.871 --rc genhtml_branch_coverage=1 00:04:26.871 --rc genhtml_function_coverage=1 00:04:26.871 --rc genhtml_legend=1 00:04:26.871 --rc geninfo_all_blocks=1 00:04:26.871 ' 00:04:26.871 18:11:38 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:26.871 --rc lcov_branch_coverage=1 00:04:26.871 --rc lcov_function_coverage=1 00:04:26.871 --rc genhtml_branch_coverage=1 00:04:26.871 --rc genhtml_function_coverage=1 00:04:26.871 --rc genhtml_legend=1 00:04:26.871 --rc geninfo_all_blocks=1 00:04:26.871 ' 00:04:26.871 18:11:38 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:26.871 --rc lcov_branch_coverage=1 00:04:26.871 --rc lcov_function_coverage=1 00:04:26.871 --rc genhtml_branch_coverage=1 00:04:26.871 --rc genhtml_function_coverage=1 00:04:26.871 --rc genhtml_legend=1 00:04:26.871 --rc geninfo_all_blocks=1 00:04:26.871 --no-external' 00:04:26.871 18:11:38 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:26.871 --rc 
lcov_branch_coverage=1 00:04:26.871 --rc lcov_function_coverage=1 00:04:26.871 --rc genhtml_branch_coverage=1 00:04:26.871 --rc genhtml_function_coverage=1 00:04:26.871 --rc genhtml_legend=1 00:04:26.871 --rc geninfo_all_blocks=1 00:04:26.871 --no-external' 00:04:26.871 18:11:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:27.129 lcov: LCOV version 1.14 00:04:27.129 18:11:38 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:45.246 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:45.246 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:00.125 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no 
functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:00.126 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:00.126 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:00.126 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:00.127 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:00.127 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:00.127 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:02.051 18:12:13 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:02.051 18:12:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.051 18:12:13 -- common/autotest_common.sh@10 -- # set +x 00:05:02.051 18:12:13 -- spdk/autotest.sh@91 -- # rm -f 00:05:02.051 18:12:13 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:02.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.874 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:02.874 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:02.874 18:12:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:02.874 18:12:14 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:02.874 18:12:14 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:02.874 18:12:14 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:02.874 18:12:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:02.874 18:12:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:02.874 18:12:14 -- 
common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:02.874 18:12:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:02.874 18:12:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:02.874 18:12:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:02.874 18:12:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:02.874 18:12:14 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:02.874 18:12:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:02.874 18:12:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:02.874 18:12:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:02.874 18:12:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:02.874 18:12:14 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:02.874 18:12:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:02.874 18:12:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:02.874 18:12:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:02.874 18:12:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:02.874 18:12:14 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:02.874 18:12:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:02.874 18:12:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:02.874 18:12:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:02.874 18:12:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.874 18:12:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:02.874 18:12:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:02.874 18:12:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:02.874 18:12:14 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:02.874 No valid GPT data, bailing 00:05:02.874 18:12:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:02.874 18:12:14 -- scripts/common.sh@391 -- # pt= 00:05:02.874 18:12:14 -- scripts/common.sh@392 -- # return 1 00:05:02.874 18:12:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:02.874 1+0 records in 00:05:02.874 1+0 records out 00:05:02.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496947 s, 211 MB/s 00:05:02.874 18:12:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.874 18:12:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:02.874 18:12:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:02.874 18:12:14 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:02.874 18:12:14 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:02.874 No valid GPT data, bailing 00:05:02.874 18:12:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:02.874 18:12:14 -- scripts/common.sh@391 -- # pt= 00:05:02.874 18:12:14 -- scripts/common.sh@392 -- # return 1 00:05:02.874 18:12:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:02.874 1+0 records in 00:05:02.874 1+0 records out 00:05:02.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0039025 s, 269 MB/s 00:05:02.874 18:12:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:02.875 18:12:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:02.875 
18:12:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:02.875 18:12:14 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:02.875 18:12:14 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:03.133 No valid GPT data, bailing 00:05:03.133 18:12:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:03.133 18:12:14 -- scripts/common.sh@391 -- # pt= 00:05:03.133 18:12:14 -- scripts/common.sh@392 -- # return 1 00:05:03.133 18:12:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:03.133 1+0 records in 00:05:03.133 1+0 records out 00:05:03.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505303 s, 208 MB/s 00:05:03.133 18:12:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:03.133 18:12:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:03.133 18:12:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:03.133 18:12:14 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:03.133 18:12:14 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:03.133 No valid GPT data, bailing 00:05:03.133 18:12:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:03.133 18:12:15 -- scripts/common.sh@391 -- # pt= 00:05:03.133 18:12:15 -- scripts/common.sh@392 -- # return 1 00:05:03.133 18:12:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:03.133 1+0 records in 00:05:03.133 1+0 records out 00:05:03.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450153 s, 233 MB/s 00:05:03.133 18:12:15 -- spdk/autotest.sh@118 -- # sync 00:05:03.133 18:12:15 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:03.133 18:12:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:03.133 18:12:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:05.041 18:12:16 -- spdk/autotest.sh@124 -- # uname -s 00:05:05.041 18:12:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:05.041 18:12:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:05.041 18:12:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.041 18:12:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.041 18:12:16 -- common/autotest_common.sh@10 -- # set +x 00:05:05.041 ************************************ 00:05:05.041 START TEST setup.sh 00:05:05.041 ************************************ 00:05:05.041 18:12:16 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:05.041 * Looking for test storage... 
00:05:05.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:05.041 18:12:16 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:05.041 18:12:16 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:05.041 18:12:16 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:05.042 18:12:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.042 18:12:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.042 18:12:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:05.042 ************************************ 00:05:05.042 START TEST acl 00:05:05.042 ************************************ 00:05:05.042 18:12:17 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:05.300 * Looking for test storage... 00:05:05.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:05.300 18:12:17 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:05.300 18:12:17 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.300 18:12:17 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:05.300 18:12:17 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:05.300 18:12:17 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:05.300 
18:12:17 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:05.300 18:12:17 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:05.300 18:12:17 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.300 18:12:17 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.866 18:12:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:05.866 18:12:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:05.866 18:12:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:05.866 18:12:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:05.866 18:12:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.866 18:12:17 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:06.432 18:12:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:06.432 18:12:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:06.432 18:12:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:06.432 Hugepages 00:05:06.432 node hugesize free / total 00:05:06.432 18:12:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:06.432 18:12:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:06.432 18:12:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:06.432 00:05:06.432 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:06.432 18:12:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:06.432 18:12:18 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:06.432 18:12:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:06.713 18:12:18 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:06.713 18:12:18 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.713 18:12:18 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.713 18:12:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:06.713 ************************************ 00:05:06.713 START TEST denied 
00:05:06.713 ************************************ 00:05:06.713 18:12:18 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:06.713 18:12:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:06.713 18:12:18 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:06.713 18:12:18 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:06.713 18:12:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.713 18:12:18 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:07.647 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:07.647 18:12:19 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:07.647 18:12:19 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:07.647 18:12:19 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:07.647 18:12:19 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:07.647 18:12:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:07.647 18:12:19 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:07.647 18:12:19 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:07.647 18:12:19 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:07.647 18:12:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.647 18:12:19 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.214 00:05:08.214 real 0m1.465s 00:05:08.214 user 0m0.592s 00:05:08.214 sys 0m0.809s 00:05:08.214 18:12:20 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.214 18:12:20 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:08.214 ************************************ 00:05:08.214 END TEST denied 00:05:08.214 ************************************ 00:05:08.214 18:12:20 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:08.214 18:12:20 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:08.214 18:12:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:08.214 18:12:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.214 18:12:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:08.214 ************************************ 00:05:08.214 START TEST allowed 00:05:08.214 ************************************ 00:05:08.214 18:12:20 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:08.214 18:12:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:08.214 18:12:20 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:08.214 18:12:20 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:08.214 18:12:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.214 18:12:20 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:09.149 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:09.149 18:12:20 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:09.149 18:12:20 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:09.149 18:12:20 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 
00:05:09.149 18:12:20 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:09.149 18:12:20 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:09.149 18:12:20 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:09.149 18:12:20 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:09.149 18:12:20 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:09.149 18:12:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.149 18:12:20 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.723 00:05:09.724 real 0m1.513s 00:05:09.724 user 0m0.670s 00:05:09.724 sys 0m0.833s 00:05:09.724 18:12:21 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.724 ************************************ 00:05:09.724 18:12:21 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:09.724 END TEST allowed 00:05:09.724 ************************************ 00:05:09.724 18:12:21 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:09.724 ************************************ 00:05:09.724 END TEST acl 00:05:09.724 ************************************ 00:05:09.724 00:05:09.724 real 0m4.724s 00:05:09.724 user 0m2.064s 00:05:09.724 sys 0m2.594s 00:05:09.724 18:12:21 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.724 18:12:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:09.983 18:12:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:09.983 18:12:21 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:09.983 18:12:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.983 18:12:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.983 18:12:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:09.983 ************************************ 00:05:09.983 START TEST hugepages 00:05:09.983 ************************************ 00:05:09.983 18:12:21 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:09.983 * Looking for test storage... 
00:05:09.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5820492 kB' 'MemAvailable: 7407096 kB' 'Buffers: 2436 kB' 'Cached: 1800240 kB' 'SwapCached: 0 kB' 'Active: 435328 kB' 'Inactive: 1472120 kB' 'Active(anon): 115260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 106404 kB' 'Mapped: 48660 kB' 'Shmem: 10488 kB' 'KReclaimable: 62712 kB' 'Slab: 135860 kB' 'SReclaimable: 62712 kB' 'SUnreclaim: 73148 kB' 'KernelStack: 6428 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 337524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.983 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
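The long runs of [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] followed by continue above are bash xtrace output from setup/common.sh's get_meminfo as it walks /proc/meminfo one key at a time looking for Hugepagesize: xtrace prints a quoted pattern on the right-hand side of == inside [[ ]] with every character backslash-escaped, to show it is matched literally rather than as a glob. A minimal reproduction of that trace form, assuming plain bash (this snippet is illustrative and not part of the test scripts):

#!/usr/bin/env bash
# Reproduce the escaped-pattern xtrace form seen in the log above.
set -x
get="Hugepagesize"
for key in MemTotal MemFree Hugepagesize; do
    # xtrace shows e.g.: [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
    [[ $key == "$get" ]] && echo "found $key"
done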
00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.984 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
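In outline, the loop traced above is get_meminfo from setup/common.sh: with no NUMA node argument it falls back to /proc/meminfo (the [[ -e /sys/devices/system/node/node/meminfo ]] and [[ -n '' ]] checks near the top of this trace), loads the counters, strips any leading "Node N " prefix, then reads each "key: value" pair with IFS=': ' and keeps hitting continue until the key equals the requested one, here Hugepagesize. A simplified standalone sketch of that lookup pattern follows; it uses a while-read plus sed instead of the script's mapfile and extglob trim, and it is an approximation reconstructed from the trace, not the SPDK helper itself:

#!/usr/bin/env bash
# Simplified re-creation of the lookup traced above: print the value of one
# /proc/meminfo (or per-node meminfo) field, e.g. Hugepagesize -> 2048.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live under /sys when a node id is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other key, as in the trace
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_sketch Hugepagesize   # prints 2048 on this test VM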
00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:09.985 18:12:21 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:09.985 18:12:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.985 18:12:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.985 18:12:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:09.985 ************************************ 00:05:09.985 START TEST default_setup 00:05:09.985 ************************************ 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.985 18:12:21 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.924 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.924 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.924 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.924 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:10.924 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:10.924 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.924 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.924 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:10.924 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:10.924 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:10.924 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.924 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.924 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.925 18:12:22 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7921872 kB' 'MemAvailable: 9508340 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452764 kB' 'Inactive: 1472136 kB' 'Active(anon): 132696 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123560 kB' 'Mapped: 49000 kB' 'Shmem: 10464 kB' 'KReclaimable: 62404 kB' 'Slab: 135432 kB' 'SReclaimable: 62404 kB' 'SUnreclaim: 73028 kB' 'KernelStack: 6496 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
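Before this second scan started, default_setup had already run get_test_nr_hugepages 2097152 0: the requested 2097152 kB divided by the 2048 kB Hugepagesize found earlier gives nr_hugepages=1024 for the single NUMA node on this VM, and the meminfo snapshot at the head of this scan reflects it (HugePages_Total: 1024, Hugetlb: 2097152 kB). The conversion, shown here as a small sketch of the arithmetic rather than the script's exact code:

# Requested pool size (kB) divided by the huge page size (kB) gives the page
# count that default_setup expects scripts/setup.sh to leave configured.
size_kb=2097152
hugepagesize_kb=2048
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "$nr_hugepages"   # 1024, matching HugePages_Total in the snapshot above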
00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.925 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
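The AnonHugePages lookup that this scan is about to satisfy belongs to verify_nr_hugepages: earlier in the trace it evaluated [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], i.e. transparent hugepages are not globally disabled on this VM, so it goes on to read AnonHugePages (0 kB here), presumably so that THP-backed anonymous memory can be accounted for separately from the explicitly reserved pool being checked. A minimal sketch of that pair of checks, assuming the usual sysfs location for the THP switch (the variable names are illustrative, not the SPDK helpers):

#!/usr/bin/env bash
# Sketch: only look at AnonHugePages when transparent hugepages are enabled,
# mirroring the condition traced above.
thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon=0
if [[ $thp_enabled != *"[never]"* ]]; then
    # THP in "always" or "madvise" mode; AnonHugePages may be non-zero.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "AnonHugePages: ${anon} kB"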
00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7922012 kB' 'MemAvailable: 9508440 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452468 kB' 'Inactive: 1472136 kB' 'Active(anon): 132400 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123536 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135352 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73028 kB' 'KernelStack: 6400 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.926 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
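The key this scan is looking for, HugePages_Surp, counts surplus huge pages: pages the kernel allocated beyond nr_hugepages (possible only when overcommit is allowed via nr_overcommit_hugepages) and will give back once they are unused. verify_nr_hugepages reads it into the surp local declared earlier, and the resv local suggests a matching HugePages_Rsvd lookup follows, so the totals it verifies reflect the configured pool rather than a temporarily inflated or partially reserved one. For reference, both counters can be pulled straight from /proc/meminfo; this one-liner is illustrative, not the test's helper:

# Surplus and reserved huge page counters, straight from /proc/meminfo.
awk '/^HugePages_(Surp|Rsvd):/ {print $1, $2}' /proc/meminfo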
00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.927 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.927 [xtrace condensed 00:05:10.927-00:05:10.928: setup/common.sh@31-32 repeats the IFS=': ' / read -r var val _ / compare / continue cycle for each remaining key of the snapshot -- Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd -- none of which matches HugePages_Surp, until the HugePages_Surp line itself matches at setup/common.sh@32] 18:12:22 setup.sh.hugepages.default_setup --
setup/common.sh@33 -- # echo 0 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7922012 kB' 'MemAvailable: 9508440 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452364 kB' 'Inactive: 1472136 kB' 'Active(anon): 132296 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123436 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135348 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73024 kB' 'KernelStack: 6400 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.928 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.928 18:12:22 
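What the trace above is doing: the test's get_meminfo helper in setup/common.sh snapshots a meminfo file with mapfile, strips any 'Node <n> ' prefix, then reads each line with IFS=': ' and prints the value column of the first key that matches the requested name; every non-matching key is skipped with 'continue', which is what produces the long per-key trace. A simplified, self-contained sketch of that pattern follows; it is reconstructed from the xtrace, and the function name get_meminfo_sketch and the trailing return 1 fallback are illustrative rather than the exact SPDK helper.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

  # Sketch of the lookup pattern seen at setup/common.sh@16-33: print the
  # value column for one key of a meminfo-style file.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node lookups read that node's own meminfo file instead.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node <n> "
      local var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"                # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo_sketch HugePages_Total    # prints 1024 on this runner per the snapshot above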
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.928 [xtrace condensed 00:05:10.928-00:05:10.930: setup/common.sh@31-32 walks the rest of the snapshot key by key -- MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped -- none matches HugePages_Rsvd, so every iteration hits 'continue'] 18:12:22 setup.sh.hugepages.default_setup --
setup/common.sh@31 -- # IFS=': ' 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:10.930 nr_hugepages=1024 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:10.930 resv_hugepages=0 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:10.930 surplus_hugepages=0 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:10.930 anon_hugepages=0 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:10.930 18:12:22 
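With HugePages_Surp and HugePages_Rsvd both read back as 0, setup/hugepages.sh@99-110 now has the counters it echoes here (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and asserts that the kernel's total equals their sum before re-reading HugePages_Total. A standalone illustration of that accounting check using the values from this run; the variable names total/surp/resv are descriptive stand-ins for the script's locals, not its exact code.

  # Values echoed by the test at this point in the run.
  nr_hugepages=1024
  surp=0       # HugePages_Surp read from /proc/meminfo
  resv=0       # HugePages_Rsvd read from /proc/meminfo
  total=1024   # HugePages_Total as reported in the snapshots above

  # default_setup only proceeds if the configured page count fully accounts
  # for the kernel's total: 1024 == 1024 + 0 + 0.
  if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
      echo "hugepage accounting consistent: total=$total"
  else
      echo "unexpected hugepage accounting: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
      exit 1
  fi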
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.930 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.931 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7921760 kB' 'MemAvailable: 9508188 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452192 kB' 'Inactive: 1472136 kB' 'Active(anon): 132124 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123260 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135344 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73020 kB' 'KernelStack: 6384 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:10.931 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.931 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.931 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.931 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.931 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.931 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.931 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.931 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.931 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.931 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.931 18:12:22 
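The snapshot printed above is internally consistent on the hugepage side: with Hugepagesize at 2048 kB and HugePages_Total at 1024, the Hugetlb field is exactly 1024 * 2048 kB = 2097152 kB (2 GiB). A quick way to recompute that from a live /proc/meminfo, kept in the same shell style as the test; this check is illustrative and is not part of setup/common.sh.

  # Recompute Hugetlb (kB) from HugePages_Total x Hugepagesize and compare.
  while IFS=': ' read -r key val _; do
      case $key in
          HugePages_Total) pages=$val ;;
          Hugepagesize)    page_kb=$val ;;
          Hugetlb)         hugetlb_kb=$val ;;
      esac
  done < /proc/meminfo
  echo "computed: $(( pages * page_kb )) kB, reported: $hugetlb_kb kB"
  # With the values in the snapshot above: 1024 * 2048 kB = 2097152 kB, matching Hugetlb.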
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.931 [xtrace condensed 00:05:10.931-00:05:10.932: setup/common.sh@31-32 checks each key of the snapshot against HugePages_Total -- Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped -- and skips each with 'continue'] 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:10.932 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.192 18:12:22 
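Here the flow moves from system-wide to per-node accounting: hugepages.sh's get_nodes enumerates /sys/devices/system/node/node*, records the page count for the single node on this VM (no_nodes=1), and get_meminfo is then called with node=0, so the next read targets /sys/devices/system/node/node0/meminfo, whose lines carry a 'Node 0 ' prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A rough sketch of that per-node enumeration, assuming the get_meminfo_sketch helper from the earlier sketch; the nodes_sys array name follows the trace, everything else is illustrative.

  shopt -s extglob nullglob

  # Enumerate NUMA nodes and record the hugepage counters each node reports.
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      n=${node##*node}                                  # /sys/.../node0 -> 0
      nodes_sys[$n]=$(get_meminfo_sketch HugePages_Total "$n")
  done

  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

  for n in "${!nodes_sys[@]}"; do
      echo "node$n: HugePages_Total=${nodes_sys[$n]} HugePages_Surp=$(get_meminfo_sketch HugePages_Surp "$n")"
  done
  # On this single-node VM the loop prints: node0: HugePages_Total=1024 HugePages_Surp=0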
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7921760 kB' 'MemUsed: 4320216 kB' 'SwapCached: 0 kB' 'Active: 452268 kB' 'Inactive: 1472136 kB' 'Active(anon): 132200 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1802668 kB' 'Mapped: 48664 kB' 'AnonPages: 123324 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62324 kB' 'Slab: 135344 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.192 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.193 node0=1024 expecting 1024 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:11.193 00:05:11.193 real 0m1.047s 00:05:11.193 user 0m0.505s 00:05:11.193 sys 0m0.480s 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.193 18:12:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:11.193 ************************************ 00:05:11.193 END TEST default_setup 00:05:11.193 ************************************ 00:05:11.193 18:12:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:11.193 18:12:22 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:11.193 18:12:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
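Editor's note on the trace above: the default_setup block is setup/common.sh's get_meminfo walking a meminfo file key by key with `IFS=': '` and `read -r var val _`, skipping every non-matching key via `continue` and echoing the value once the requested key (here HugePages_Surp) is reached; `node0=1024 expecting 1024` then records that the expected 1024 default-size hugepages were seen on node 0 before the test ends. The following is a minimal standalone sketch of that parsing pattern, not SPDK's actual helper -- the name get_meminfo_value and the final comparison are illustrative assumptions.

#!/usr/bin/env bash
# Illustrative sketch only -- not SPDK's setup/common.sh. It mirrors the
# pattern visible in the trace: pick the meminfo file (global or per-node),
# then scan it with IFS=': ' until the requested key is found.
get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-specific file when it exists, matching
    # the "/sys/devices/system/node/node$node/meminfo" check in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        # Per-node meminfo lines carry a "Node N " prefix; drop it first.
        [[ $line == Node\ * ]] && line=${line#Node * }
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

# Example: the default_setup check boils down to a comparison like this one.
free=$(get_meminfo_value HugePages_Free)
[[ $free == 1024 ]] && echo 'node0=1024 expecting 1024'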
00:05:11.193 18:12:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.193 18:12:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:11.193 ************************************ 00:05:11.193 START TEST per_node_1G_alloc 00:05:11.193 ************************************ 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:11.193 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.194 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.194 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:11.194 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:11.194 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:11.194 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:11.194 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:11.194 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:11.194 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:11.194 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.194 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.455 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:11.455 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:11.455 
18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8971760 kB' 'MemAvailable: 10558188 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452696 kB' 'Inactive: 1472136 kB' 'Active(anon): 132628 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123760 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135392 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73068 kB' 'KernelStack: 6488 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.455 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
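Editor's note on the per_node_1G_alloc trace: get_test_nr_hugepages converts the 1048576 kB request on node 0 into 512 default-size (2048 kB) hugepages, and verify_nr_hugepages then gathers AnonHugePages (only when transparent hugepages are not globally "[never]"), HugePages_Surp, and HugePages_Rsvd before checking the per-node counts. The sketch below is a hedged approximation of that bookkeeping, not the real verify_nr_hugepages; the meminfo helper, expected_per_node, and the final comparison are illustrative assumptions.

#!/usr/bin/env bash
# Hedged sketch of the bookkeeping the trace implies -- not SPDK's
# verify_nr_hugepages. Helper and variable names here are illustrative.
expected_per_node=512   # 1048576 kB requested / 2048 kB hugepage size

# Pull a single value out of /proc/meminfo by key.
meminfo() { awk -v k="$1:" '$1 == k {print $2; exit}' /proc/meminfo; }

anon=0
# The trace only counts AnonHugePages when transparent hugepages are not
# globally disabled ("[never]"), so mirror that guard here.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ -n $thp && $thp != *'[never]'* ]]; then
    anon=$(meminfo AnonHugePages)
fi
surp=$(meminfo HugePages_Surp); surp=${surp:-0}
resv=$(meminfo HugePages_Rsvd); resv=${resv:-0}
free=$(meminfo HugePages_Free); free=${free:-0}

echo "anon=${anon:-0} surp=$surp resv=$resv"
# A per-node check would then compare node 0's free pages (surplus excluded)
# against the expected 512, analogous to the "== \5\1\2"-style tests above.
if (( free - surp >= expected_per_node )); then
    echo "node0=$expected_per_node expecting $expected_per_node"
fi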
00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.456 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8972280 kB' 'MemAvailable: 10558708 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452768 kB' 'Inactive: 1472136 kB' 'Active(anon): 132700 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123568 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135392 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73068 kB' 'KernelStack: 6456 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.457 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.458 18:12:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
(setup/common.sh@31-32: the get_meminfo read loop walks the remaining /proc/meminfo fields, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd, and continues past each one because none of them matches HugePages_Surp)
00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local
get=HugePages_Rsvd 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.458 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8972392 kB' 'MemAvailable: 10558820 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452624 kB' 'Inactive: 1472136 kB' 'Active(anon): 132556 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123432 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135388 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73064 kB' 'KernelStack: 6432 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.459 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.459 18:12:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
(setup/common.sh@31-32: the read loop walks every remaining field of the snapshot above, Buffers through HugePages_Free, skipping each one because it does not match HugePages_Rsvd)
00:05:11.722 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:11.722 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:11.722 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.722 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:11.722 nr_hugepages=512
18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:11.722 resv_hugepages=0
18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:11.722 surplus_hugepages=0
18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
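For reference, the helper being traced over and over here, get_meminfo from setup/common.sh, reduces to the following minimal bash sketch. It is reconstructed from the trace above, not copied from the SPDK sources, and the node/fallback handling is simplified; only the names visible in the trace (get, node, mem_f, mem) are taken from it.

    # Minimal sketch of get_meminfo as traced above (simplified, not the exact SPDK helper).
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # Per-node statistics live in sysfs; fall back to the global file when no node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <N> " prefix; strip it so the field names line up.
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # Print the value of the requested field and stop at the first match.
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Rsvd against the snapshot above it prints 0; passing a node index, as done further down, switches it to the per-node meminfo file.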
00:05:11.722 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:11.722 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:11.722 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:11.722 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
(setup/common.sh@17-31: get_meminfo HugePages_Total sets up exactly as before, get=HugePages_Total, node='', mem_f=/proc/meminfo, maps the file into mem[] and strips any 'Node <N> ' prefix, then prints a fresh snapshot that differs from the one above only in 'MemFree: 8972688 kB', 'MemAvailable: 10559116 kB', 'Active: 452396 kB', 'Active(anon): 132328 kB', 'AnonPages: 123480 kB' and 'PageTables: 4268 kB', and starts reading it field by field)
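The two arithmetic checks at hugepages.sh@107 and @109 above are the actual assertion of this step: the 512 pages configured for the per-node 1G-alloc case must equal nr_hugepages plus the surplus and reserved counts that were just read back. A standalone sketch of that check, reusing the get_meminfo sketch above (the 512/0/0 values are the ones from this run, everything else is assumed, not taken from hugepages.sh):

    # Sketch of the hugepage accounting check traced at hugepages.sh@107-@110 (not the script itself).
    nr_hugepages=512 surp=0 resv=0            # values read back earlier in this trace
    total=$(get_meminfo HugePages_Total)      # helper from the sketch above
    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) ||
        echo "unexpected hugepage accounting: total=$total surp=$surp resv=$resv" >&2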
(setup/common.sh@31-32: the read loop walks that snapshot from MemTotal through Unaccepted, skipping every field that does not match HugePages_Total)
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
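get_nodes, traced just above, only records how many NUMA nodes the runner has and how many hugepages each one is expected to hold (a single node with 512 pages on this VM). A rough bash equivalent follows; reading the per-node count back from sysfs is an assumption, the trace only shows nodes_sys[0] being set to 512.

    # Sketch of the get_nodes bookkeeping traced above; nodes_sys/no_nodes names come from the trace,
    # the sysfs nr_hugepages read is assumed.
    declare -A nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node[0-9]*; do
            # key by node index: ".../node0" -> 0
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))
    }

On this runner that yields no_nodes=1 and nodes_sys[0]=512, which is exactly what the trace records.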
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
(setup/common.sh@17-31: this time get_meminfo runs with get=HugePages_Surp and node=0, so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, the per-node file is mapped into mem[] and its 'Node 0 ' prefixes are stripped before the field scan starts)
00:05:11.724 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8973080 kB' 'MemUsed: 3268896 kB' 'SwapCached: 0 kB' 'Active: 452388 kB' 'Inactive: 1472136 kB' 'Active(anon): 132320 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1802668 kB' 'Mapped: 48664 kB' 'AnonPages: 123480 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62324 kB' 'Slab: 135388 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
(setup/common.sh@31-32: the loop starts scanning this node0 snapshot, skipping MemTotal, MemFree and MemUsed since none of them is HugePages_Surp)
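The per-node read just above is the one place where the 'Node <N> ' prefix stripping at setup/common.sh@29 matters: every line of /sys/devices/system/node/node0/meminfo is prefixed with the node number, unlike /proc/meminfo. A quick illustration of that extglob expansion (the sample line is made up; the real values are in the node0 snapshot above):

    # Demonstrates the "${mem[@]#Node +([0-9]) }" expansion from setup/common.sh@29.
    shopt -s extglob
    line='Node 0 HugePages_Surp: 0'
    echo "${line#Node +([0-9]) }"    # prints: HugePages_Surp: 0

Without the strip, the keys read from the node file would never match the plain field names the loop compares against.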
(setup/common.sh@31-32: the read loop keeps walking the node0 snapshot, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab and SReclaimable, none of which matches HugePages_Surp)
00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.725 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
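The scan finishing just below matches HugePages_Surp, echoes 0, and feeds the per-node accounting and the final "node0=512 expecting 512" check that closes this test. A rough sketch of that accounting with illustrative names (the real logic lives in setup/hugepages.sh; 512 is 1 GiB expressed in the 2048 kB pages reported by Hugepagesize above):

    nodes_test=(512)   # pages the test actually allocated on node0
    resv=0 surp=0      # from get_meminfo HugePages_Rsvd / HugePages_Surp
    (( nodes_test[0] += resv + surp ))
    expected=512       # 1048576 kB / 2048 kB per hugepage
    echo "node0=${nodes_test[0]} expecting $expected"
    [[ ${nodes_test[0]} == "$expected" ]]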
00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:11.726 node0=512 expecting 512 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:11.726 00:05:11.726 real 0m0.532s 00:05:11.726 user 0m0.284s 00:05:11.726 sys 0m0.284s 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.726 18:12:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:11.726 ************************************ 00:05:11.726 END TEST per_node_1G_alloc 00:05:11.726 ************************************ 00:05:11.726 18:12:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:11.726 18:12:23 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:11.726 18:12:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.726 18:12:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.726 18:12:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:11.726 ************************************ 00:05:11.726 START TEST even_2G_alloc 00:05:11.726 ************************************ 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:11.726 18:12:23 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.726 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.999 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.999 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:11.999 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7926020 kB' 'MemAvailable: 9512448 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452536 kB' 'Inactive: 1472136 kB' 'Active(anon): 132468 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123632 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135336 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73012 kB' 'KernelStack: 6456 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
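Between the two snapshots, even_2G_alloc has requested 1024 pages (2097152 kB at the 2048 kB Hugepagesize shown above), re-run scripts/setup.sh, and entered verify_nr_hugepages, which first decides whether anonymous hugepages should be counted at all: hugepages.sh@96 inspects the transparent-hugepage state ("always [madvise] never" in this run) and only then reads AnonHugePages. A hedged sketch of that step, assuming the usual sysfs location for the THP switch and reusing the illustrative helper above:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)          # 0 kB in the snapshot above
    else
        anon=0
    fi
    echo "anon=$anon"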
00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.999 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.000 18:12:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.000 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.000 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7926020 kB' 'MemAvailable: 9512448 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452488 kB' 'Inactive: 1472136 kB' 'Active(anon): 132420 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123532 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135320 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72996 kB' 'KernelStack: 6416 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.001 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 
18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.287 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.288 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 
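hugepages.sh@97 through @100 in the trace collect three counters in sequence: anon (0 above), surp from HugePages_Surp (0 above), and now resv from HugePages_Rsvd, which the entries below resolve the same way. A sketch of that sequence, again using the illustrative helper from earlier:

    anon=$(get_meminfo_sketch AnonHugePages)    # 0 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # queried by the scan that follows
    echo "anon=$anon surp=$surp resv=$resv"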
00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7926020 kB' 'MemAvailable: 9512448 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452420 kB' 'Inactive: 1472136 kB' 'Active(anon): 132352 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123488 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135320 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72996 kB' 'KernelStack: 6432 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.289 18:12:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.289 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.290 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:12.291 nr_hugepages=1024 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:12.291 resv_hugepages=0 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.291 surplus_hugepages=0 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.291 anon_hugepages=0 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 
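With HugePages_Surp and HugePages_Rsvd both read back as 0, hugepages.sh has surp=0 and resv=0 and checks (at @107/@109 above) that the expected 1024 pages equal nr_hugepages plus the surplus and reserved counts; the get_meminfo call that starts here and continues below then re-reads HugePages_Total to confirm the same figure. A self-contained restatement of that arithmetic, using this run's values (names mirror the trace, but this is a sketch, not the hugepages.sh source):

  nr_hugepages=1024   # pages the test asked for
  surp=0              # HugePages_Surp read back above
  resv=0              # HugePages_Rsvd read back above
  (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )) \
      && echo 'all 1024 pages are ordinary allocated hugepages (no surplus, none reserved)'
  # With Hugepagesize = 2048 kB (see the meminfo snapshot above):
  echo "total huge memory: $(( nr_hugepages * 2048 )) kB"   # 2097152 kB, matching Hugetlb above and the 2G in the test name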
-- # local mem_f mem 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7926020 kB' 'MemAvailable: 9512448 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452328 kB' 'Inactive: 1472136 kB' 'Active(anon): 132260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123368 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135320 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72996 kB' 'KernelStack: 6400 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.291 18:12:24 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.291 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.292 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7926020 kB' 'MemUsed: 4315956 kB' 'SwapCached: 0 kB' 'Active: 452328 kB' 'Inactive: 1472136 kB' 'Active(anon): 132260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 
'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1802668 kB' 'Mapped: 48664 kB' 'AnonPages: 123416 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62324 kB' 'Slab: 135320 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
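For the per-node pass, get_nodes (hugepages.sh@112, traced further above) globs /sys/devices/system/node/node+([0-9]) and records one entry per NUMA node (no_nodes=1 here), and get_meminfo is then invoked with node=0 so that mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose records carry a "Node 0 " prefix that the "${mem[@]#Node +([0-9]) }" expansion strips before the field-matching loop runs again. A rough standalone equivalent of that per-node lookup, assuming extglob is enabled and reading the count straight from the node's meminfo (the exact source of the 1024 assigned at hugepages.sh@30 is not visible here, since xtrace prints the already-expanded value):

  # Hypothetical per-node hugepage census mirroring get_nodes + get_meminfo above.
  shopt -s extglob                       # needed for node+([0-9]) and the prefix strip
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      id=${node##*node}                  # .../node0 -> 0
      mapfile -t mem < "$node/meminfo"
      mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Total: 1024" -> "HugePages_Total: 1024"
      for line in "${mem[@]}"; do
          [[ $line == HugePages_Total:* ]] || continue
          nodes_sys[$id]=$(( ${line#HugePages_Total:} ))   # arithmetic expansion trims the padding
      done
  done
  echo "nodes: ${!nodes_sys[*]}  hugepages per node: ${nodes_sys[*]}"   # here: node 0 with 1024 pages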
-- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.293 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.294 node0=1024 expecting 1024 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:12.294 00:05:12.294 real 0m0.535s 00:05:12.294 user 0m0.275s 00:05:12.294 sys 0m0.293s 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.294 18:12:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:12.294 ************************************ 00:05:12.294 END TEST even_2G_alloc 00:05:12.294 ************************************ 00:05:12.294 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:12.294 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:12.294 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.294 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.294 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:12.294 ************************************ 00:05:12.294 START TEST odd_alloc 00:05:12.294 ************************************ 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # 
odd_alloc 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.294 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.553 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:12.553 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7920176 kB' 'MemAvailable: 9506604 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452620 kB' 'Inactive: 1472136 kB' 'Active(anon): 132552 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123660 kB' 'Mapped: 48888 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135324 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73000 kB' 'KernelStack: 6424 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.816 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
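Just before the AnonHugePages scan running through this part of the trace, the verify pass probed /sys/kernel/mm/transparent_hugepage/enabled, saw 'always [madvise] never' (THP not globally disabled), and went on to read AnonHugePages. A hedged sketch of that probe, reusing the illustrative lookup_meminfo helper from the sketch further up; what the script then does with the value is not shown here:

# Illustrative only: skip the AnonHugePages read when THP is set to [never],
# mirroring the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test above.
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp_state != *'[never]'* ]]; then
    anon_kb=$(lookup_meminfo AnonHugePages)
    echo "AnonHugePages: ${anon_kb} kB"
fi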
00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
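The sizing for this odd_alloc pass is visible a little further up: get_test_nr_hugepages was asked for 2098176 kB, which is 1024.5 pages at the 2048 kB page size reported in the snapshot, and the script settles on nr_hugepages=1025, an odd page count matching the test's name. The snapshot's 'Hugetlb: 2099200 kB' agrees with that count; a one-line sanity check of the arithmetic, with the values copied from the log and nothing else assumed:

# 1025 explicit hugepages x 2048 kB each = 2099200 kB, the Hugetlb figure
# printed in the meminfo snapshot above.
pages=1025
page_kb=2048
echo "$(( pages * page_kb )) kB"   # -> 2099200 kB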
00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.817 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7920176 kB' 'MemAvailable: 9506604 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452400 kB' 'Inactive: 1472136 kB' 'Active(anon): 132332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123492 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135340 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73016 kB' 'KernelStack: 6432 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.818 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 
18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.819 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
# [[ -n '' ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7920176 kB' 'MemAvailable: 9506604 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452132 kB' 'Inactive: 1472136 kB' 'Active(anon): 132064 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123208 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135324 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73000 kB' 'KernelStack: 6400 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.820 
18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.820 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.820 [... the same read/compare/continue cycle repeats for every remaining /proc/meminfo key from SwapCached through HugePages_Total, none of which matches HugePages_Rsvd ...] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free ==
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:12.822 nr_hugepages=1025 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:12.822 resv_hugepages=0 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.822 surplus_hugepages=0 00:05:12.822 anon_hugepages=0 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7920792 kB' 'MemAvailable: 9507220 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452440 kB' 'Inactive: 1472136 kB' 'Active(anon): 132372 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123500 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135324 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73000 kB' 'KernelStack: 6416 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.822 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.822 18:12:24 
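The long stretch of trace above and below is setup/common.sh's get_meminfo walking a meminfo snapshot key by key with IFS=': ' until it reaches the key it was asked for. A minimal, self-contained sketch of that lookup pattern follows; the function name and interface are illustrative assumptions, not the exact helpers from setup/common.sh:

#!/usr/bin/env bash
# Return the value recorded for $1 in /proc/meminfo, or in
# /sys/devices/system/node/node$2/meminfo when a node number is given.
# (Illustrative sketch only; the name meminfo_value is an assumption.)
meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        # per-node files prefix every line with "Node N "
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

Against the snapshot printed above, meminfo_value HugePages_Total would report 1025, which is the value the trace below extracts before checking (( 1025 == nr_hugepages + surp + resv )).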
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.822 [... the same read/compare/continue cycle repeats for the keys Active(anon) through ShmemPmdMapped, none of which matches HugePages_Total ...] 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.823 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7920792 kB' 'MemUsed: 4321184 kB' 'SwapCached: 0 kB' 'Active: 452348 kB' 'Inactive: 1472136 kB' 'Active(anon): 132280 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1802668 kB' 'Mapped: 48664 kB' 'AnonPages: 123384 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62324 kB' 'Slab: 135324 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 73000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.824 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.824 18:12:24 
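The same lookup is then repeated against /sys/devices/system/node/node0/meminfo (whose snapshot was just printed) to confirm that each NUMA node carries the page count odd_alloc asked for. A rough, self-contained approximation of that per-node check, with illustrative variable names; the real hugepages.sh also folds reserved and surplus pages into its comparison:

# Check every NUMA node's huge page count against the expected total.
shopt -s nullglob
expected=1025                                   # nr_hugepages requested by odd_alloc
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # per-node meminfo lines look like "Node 0 HugePages_Total:  1025"
    got=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    echo "node$node=${got:-0} expecting $expected"
    (( ${got:-0} == expected )) || exit 1
done

On this single-node VM that reduces to the 'node0=1025 expecting 1025' line emitted further down.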
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.824 [... the same read/compare/continue cycle repeats over the node0 meminfo snapshot for the keys Inactive through ShmemPmdMapped, none of which matches HugePages_Surp ...] 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.825 18:12:24
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.825 node0=1025 expecting 1025 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:12.825 00:05:12.825 real 0m0.532s 00:05:12.825 user 0m0.260s 00:05:12.825 sys 0m0.310s 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.825 18:12:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:12.825 ************************************ 00:05:12.825 END TEST odd_alloc 00:05:12.825 ************************************ 00:05:12.825 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:12.825 18:12:24 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:12.825 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- 
# '[' 2 -le 1 ']' 00:05:12.825 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.825 18:12:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:12.825 ************************************ 00:05:12.825 START TEST custom_alloc 00:05:12.825 ************************************ 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # 
get_test_nr_hugepages_per_node 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:12.825 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:12.826 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:12.826 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:12.826 18:12:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:12.826 18:12:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.826 18:12:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.349 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.349 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.349 18:12:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8971120 kB' 'MemAvailable: 10557548 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452856 kB' 'Inactive: 1472136 kB' 'Active(anon): 132788 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123976 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135312 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72988 kB' 'KernelStack: 6436 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.349 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.350 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8971120 kB' 'MemAvailable: 10557548 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452140 kB' 'Inactive: 1472136 kB' 'Active(anon): 132072 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123476 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135316 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72992 kB' 'KernelStack: 6432 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
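[Note] The xtrace above is verify_nr_hugepages (setup/hugepages.sh) collecting counters from /proc/meminfo: get_meminfo AnonHugePages just returned 0 (anon=0), and the scan now under way repeats the same walk for HugePages_Surp. The helper snapshots the whole meminfo file and then reads it key by key under IFS=': ' until the requested field matches. A minimal sketch of that parsing loop, assuming a simplified get_meminfo without the per-node /sys/devices/system/node branch, is:

```bash
#!/usr/bin/env bash
# Hedged sketch of the parsing loop visible in the trace (setup/common.sh);
# the per-node meminfo handling and mapfile snapshot are simplified away.
get_meminfo_sketch() {
    local get=$1            # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip entries until the requested key
        echo "${val:-0}"                   # print the value without the "kB" suffix
        return 0
    done < /proc/meminfo
    echo 0
}

get_meminfo_sketch AnonHugePages    # -> 0 on this run, hence anon=0 in the trace
```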
00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.351 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
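[Note] The meminfo snapshots printed by get_meminfo already show the pool custom_alloc asked for: HugePages_Total: 512, HugePages_Free: 512, Hugepagesize: 2048 kB, Hugetlb: 1048576 kB. That matches the conversion done by get_test_nr_hugepages at the start of the test, where a 1048576 kB (1 GiB) request divided by the 2048 kB default page size yields the 512 pages placed on node 0 via HUGENODE='nodes_hp[0]=512'. A standalone check of that arithmetic, independent of the SPDK scripts, would be:

```bash
# Hedged arithmetic check, not part of the test scripts:
size_kb=1048576                                                      # requested pool, 1 GiB
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
echo $(( size_kb / hugepagesize_kb ))                                # 1048576 / 2048 = 512
```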
00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.352 18:12:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8971120 kB' 'MemAvailable: 10557548 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452204 kB' 'Inactive: 1472136 kB' 'Active(anon): 132136 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123508 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135312 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72988 kB' 'KernelStack: 6416 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.352 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
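[Note] With surp=0 recorded, the same scan is repeated once more for HugePages_Rsvd (setup/hugepages.sh@100). The bookkeeping this block of the trace walks through amounts to the following, shown as a hedged sketch that reuses get_meminfo_sketch from the note above; the later comparison against the expected 512 pages is performed by verify_nr_hugepages itself and is not reproduced here.

```bash
# Hedged summary of the counters gathered in this block of the trace:
anon=$(get_meminfo_sketch AnonHugePages)     # 0 -> no anonymous THP currently in use
surp=$(get_meminfo_sketch HugePages_Surp)    # 0 -> no surplus hugepages
resv=$(get_meminfo_sketch HugePages_Rsvd)    # scan in progress in the trace below
echo "anon=$anon surp=$surp resv=$resv"
```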
00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:05:13.353 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: the IFS/read/compare/continue cycle repeats for each remaining /proc/meminfo field; nothing matches until HugePages_Rsvd itself is reached]
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:13.354 nr_hugepages=512
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:13.354 resv_hugepages=0
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:13.354 surplus_hugepages=0
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:13.354 anon_hugepages=0
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
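The xtrace above is setup/common.sh resolving a single field (here HugePages_Rsvd) by reading the whole meminfo file into an array, stripping any "Node N" prefix, and comparing field names one at a time. Below is a minimal sketch of that lookup pattern; get_field is a hypothetical stand-in written only to make the parsing loop explicit, not the SPDK helper itself.

    #!/usr/bin/env bash
    shopt -s extglob

    # Sketch of a get_meminfo-style lookup: read /proc/meminfo (or a per-node
    # meminfo file), strip the "Node N " prefix if present, and print the value
    # of the requested field.
    get_field() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val rest

        # Per-node counters live under /sys/devices/system/node/nodeN/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N"

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val rest <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_field HugePages_Rsvd      # system-wide value, as in the trace above
    get_field HugePages_Surp 0    # same idea, but scoped to NUMA node 0

Called without a node argument it reads /proc/meminfo, which is exactly what the HugePages_Rsvd and HugePages_Total lookups in this trace do.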
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.354 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.355 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.355 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.355 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8971120 kB' 'MemAvailable: 10557548 kB' 'Buffers: 2436 kB' 'Cached: 1800232 kB' 'SwapCached: 0 kB' 'Active: 452080 kB' 'Inactive: 1472136 kB' 'Active(anon): 132012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123380 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135312 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72988 kB' 'KernelStack: 6400 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB'
00:05:13.355 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: the per-field scan repeats for every /proc/meminfo field until HugePages_Total is reached]
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
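get_nodes walks /sys/devices/system/node/node* and records the hugepage count each node reports, after which the test's expected per-node counts are adjusted by the reserved and surplus pages read back from that node's own meminfo file (the HugePages_Surp lookup for node0 that follows). A rough, self-contained sketch of that accounting; node_field and the hard-coded 512 expectation are illustrative, not the SPDK code.

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    # Hypothetical helper: read one field from a node's meminfo, whose lines
    # look like "Node 0 HugePages_Total:   512".
    node_field() {
        awk -v f="$2:" '$1 == "Node" && $3 == f { print $4 }' \
            "/sys/devices/system/node/node$1/meminfo"
    }

    nodes_sys=()    # hugepage count the kernel reports per node
    nodes_test=()   # count the test expects per node
    resv=0          # reserved pages, from the HugePages_Rsvd lookup above

    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}                                # .../node0 -> 0
        nodes_sys[id]=$(node_field "$id" HugePages_Total)
    done
    (( ${#nodes_sys[@]} > 0 )) || exit 1

    # custom_alloc expects 512 pages on node 0; fold reserved and surplus
    # pages back into the expectation before comparing.
    nodes_test[0]=512
    for id in "${!nodes_test[@]}"; do
        surp=$(node_field "$id" HugePages_Surp)
        (( nodes_test[id] += resv + ${surp:-0} ))
    done

    echo "node0=${nodes_sys[0]} expecting ${nodes_test[0]}"

On this single-node VM the enumeration sees only node0, which is why no_nodes=1 in the trace.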
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.356 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8971120 kB' 'MemUsed: 3270856 kB' 'SwapCached: 0 kB' 'Active: 452232 kB' 'Inactive: 1472136 kB' 'Active(anon): 132164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1802668 kB' 'Mapped: 48668 kB' 'AnonPages: 123532 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62324 kB' 'Slab: 135312 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32: the per-field scan of the node0 meminfo output repeats until HugePages_Surp is reached]
00:05:13.358 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:13.358 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:13.358 18:12:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:13.358 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:13.358 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:13.358 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:13.358 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:13.358 node0=512 expecting 512
00:05:13.358 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:13.358 18:12:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:13.358
00:05:13.358 real 0m0.541s
00:05:13.358 user 0m0.281s
00:05:13.358 sys 0m0.294s
00:05:13.358 18:12:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:13.358 18:12:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:13.358 ************************************
00:05:13.358 END TEST custom_alloc
00:05:13.358 ************************************
00:05:13.358 18:12:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:13.358 18:12:25 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:13.358 18:12:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:13.358 18:12:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:13.358 18:12:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:13.358 ************************************
00:05:13.358 START TEST no_shrink_alloc
00:05:13.358 ************************************
00:05:13.358 18:12:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
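The START/END banners and the real/user/sys timings above come from the run_test wrapper in autotest_common.sh, which brackets every test function with banners and runs it under time (it also toggles xtrace, which is what the xtrace_disable and set +x entries are). run_case below is a simplified, hypothetical stand-in that reproduces only that observable behaviour; it is not the SPDK implementation.

    #!/usr/bin/env bash

    # Simplified run_test-style wrapper (illustrative): banner, time the test
    # function, banner again, and propagate its exit status.
    run_case() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # produces the real/user/sys lines seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    no_shrink_alloc() { echo "running no_shrink_alloc"; }   # placeholder test body
    run_case no_shrink_alloc no_shrink_alloc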
00:05:13.358 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:13.358 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:13.358 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:13.358 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:13.358 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:13.358 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:13.358 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:13.358 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:13.358 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
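get_test_nr_hugepages turns the requested pool size into a page count: 2097152 kB divided by the 2048 kB Hugepagesize reported in meminfo is 1024 pages, and the single user node '0' then receives the whole pool (nodes_test[0]=1024), which matches the HugePages_Total: 1024 snapshot that follows. A small worked sketch of that conversion; pages_for_size and the awk lookup are illustrative names, not the SPDK code.

    #!/usr/bin/env bash

    # Worked sketch of the size -> nr_hugepages conversion used above.
    pages_for_size() {
        local size_kb=$1 hugepagesize_kb
        # Hugepagesize is 2048 kB on this VM, per the meminfo snapshots in the log.
        hugepagesize_kb=$(awk '/^Hugepagesize:/ { print $2 }' /proc/meminfo)
        echo $(( size_kb / hugepagesize_kb ))
    }

    nodes_test=()
    nr_hugepages=$(pages_for_size 2097152)   # 2097152 / 2048 = 1024
    nodes_test[0]=$nr_hugepages              # user node '0' gets the entire pool
    echo "nr_hugepages=$nr_hugepages on node 0"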
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:13.617 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:13.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:13.880 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:13.880 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7923628 kB' 'MemAvailable: 9510060 kB' 'Buffers: 2436 kB' 'Cached: 1800236 kB' 'SwapCached: 0 kB' 'Active: 452520 kB' 'Inactive: 1472140 kB' 'Active(anon): 132452 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123852 kB' 'Mapped: 48788 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135300 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72976 kB' 'KernelStack: 6456 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB'
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.880 18:12:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.880 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.881 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7923628 kB' 'MemAvailable: 9510060 kB' 'Buffers: 2436 kB' 'Cached: 1800236 kB' 'SwapCached: 0 kB' 'Active: 452768 kB' 'Inactive: 1472140 kB' 'Active(anon): 132700 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123928 kB' 'Mapped: 49188 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135304 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72980 kB' 'KernelStack: 6464 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.882 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 
18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.883 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7923628 kB' 'MemAvailable: 9510060 kB' 'Buffers: 2436 kB' 'Cached: 1800236 kB' 'SwapCached: 0 kB' 'Active: 452364 kB' 'Inactive: 1472140 kB' 'Active(anon): 132296 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123444 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135304 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72980 kB' 'KernelStack: 6384 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.884 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.885 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.885 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.885 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.885 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.885 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.885 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.885 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.885 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.885 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.885 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.885 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.886 
18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue (repeated for Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free)
00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
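The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo key by key with IFS=': ' and read -r var val _, skipping every field until it reaches the requested one (HugePages_Rsvd here, which comes back as 0 at common.sh@33). A minimal bash sketch of that parsing pattern, reconstructed from the traced commands rather than copied from the shipped script (the sed prefix strip stands in for the mem=("${mem[@]#Node +([0-9]) }") step):

# Hedged reconstruction of the get_meminfo pattern exercised above; the real
# setup/common.sh builds a mem[] array with mapfile, this loop streams instead.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other key, as in the trace above
        echo "${val:-0}"
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    echo 0   # key absent (assumption; in the trace the matched value is echoed at common.sh@33)
}

# Usage mirroring the calls in this log:
#   get_meminfo HugePages_Rsvd      -> 0
#   get_meminfo HugePages_Total     -> 1024
#   get_meminfo HugePages_Surp 0    -> 0 (node0)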
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:13.886 nr_hugepages=1024 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:13.886 resv_hugepages=0 00:05:13.886 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:13.887 surplus_hugepages=0 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:13.887 anon_hugepages=0 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7923628 kB' 'MemAvailable: 9510060 kB' 'Buffers: 2436 kB' 'Cached: 1800236 kB' 'SwapCached: 0 kB' 'Active: 452444 kB' 'Inactive: 1472140 kB' 'Active(anon): 132376 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123532 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135304 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72980 kB' 'KernelStack: 6432 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 
'DirectMap1G: 9437184 kB'
00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:13.887 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue (repeated for every /proc/meminfo key from MemTotal through Unaccepted)
00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- #
nodes_sys[${node##*node}]=1024 00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.888 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7923628 kB' 'MemUsed: 4318348 kB' 'SwapCached: 0 kB' 'Active: 452388 kB' 'Inactive: 1472140 kB' 'Active(anon): 132320 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1802672 kB' 'Mapped: 48668 kB' 'AnonPages: 123420 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62324 kB' 'Slab: 135304 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.889 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.889 18:12:25 
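By this point hugepages.sh has confirmed that HugePages_Total (1024) matches nr_hugepages + surp + resv, and get_nodes has recorded 1024 pages for the single NUMA node before re-reading node0's own meminfo, whose lines carry a "Node 0 " prefix, for HugePages_Surp. A hedged sketch of that per-node bookkeeping against the same procfs/sysfs paths; meminfo_val and the echo formatting are assumptions, not code lifted from hugepages.sh:

#!/usr/bin/env bash
# Sketch of the node-level checks traced above; uses only standard procfs/sysfs
# paths. awk stands in for the get_meminfo parsing shown earlier.
set -euo pipefail
shopt -s extglob nullglob

meminfo_val() {  # meminfo_val <key> [node]
    local key=$1 node=${2:-} f=/proc/meminfo
    [[ -n $node ]] && f=/sys/devices/system/node/node$node/meminfo
    # Per-node files carry a "Node N " prefix in front of every key.
    sed -E 's/^Node [0-9]+ //' "$f" | awk -v k="$key:" '$1 == k {print $2; exit}'
}

nr_hugepages=1024 surp=0 resv=0
total=$(meminfo_val HugePages_Total)
(( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total=$total" >&2

declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}                                   # .../node0 -> 0
    nodes_sys[$n]=$(meminfo_val HugePages_Total "$n")
    echo "node$n: total=${nodes_sys[$n]} surplus=$(meminfo_val HugePages_Surp "$n")"
done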
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ <node0 meminfo key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue (repeated for the remaining node0 meminfo keys, MemUsed through HugePages_Free)
00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- #
IFS=': ' 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.149 node0=1024 expecting 1024 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.149 18:12:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.412 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.412 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.412 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.412 18:12:26 
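The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 is a transparent-hugepage gate: AnonHugePages is only queried because THP is not set to [never]. A hedged standalone illustration, reading the usual sysfs knob that the "always [madvise] never" string comes from (variable names are assumptions):

# Hedged illustration of the THP gate; /sys/kernel/mm/transparent_hugepage/enabled
# is the standard location of the "always [madvise] never" string seen above.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo '[never]')
anon_kb=0
if [[ $thp != *"[never]"* ]]; then
    # Only meaningful when THP can actually back anonymous mappings.
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=${anon_kb} kB"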
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7921124 kB' 'MemAvailable: 9507556 kB' 'Buffers: 2436 kB' 'Cached: 1800236 kB' 'SwapCached: 0 kB' 'Active: 452672 kB' 'Inactive: 1472140 kB' 'Active(anon): 132604 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123764 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135320 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72996 kB' 'KernelStack: 6472 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.412 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.412 
18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue (repeated for the /proc/meminfo keys from Cached through VmallocChunk)
00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # IFS=': ' 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.413 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7921124 kB' 'MemAvailable: 9507556 kB' 'Buffers: 2436 kB' 'Cached: 1800236 kB' 'SwapCached: 0 kB' 'Active: 453012 kB' 'Inactive: 1472140 kB' 'Active(anon): 132944 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 124092 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135320 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72996 kB' 'KernelStack: 6464 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 
kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 
18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.414 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
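The long runs of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" and "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" followed by "continue" above and below are setup/common.sh's get_meminfo helper scanning /proc/meminfo under set -x: it skips field after field until it reaches the requested key (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd), echoes that field's value, and returns 0. A minimal bash sketch of that loop follows, reconstructed only from this xtrace; the file-selection details, the for-loop form, and the closing usage lines are assumptions rather than the verbatim upstream SPDK helper.

#!/usr/bin/env bash
# Sketch of the get_meminfo flow seen in the trace; not the verbatim SPDK helper.
shopt -s extglob

get_meminfo() {
    local get=$1      # field to look up, e.g. HugePages_Surp
    local node=${2:-} # optional NUMA node; empty means system-wide /proc/meminfo
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # The trace also probes /sys/devices/system/node/node$node/meminfo; with
    # node unset it stays on /proc/meminfo, which is what this sketch assumes.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo files prefix each line with "Node N "; strip that
    # prefix (extglob pattern) so the key/value parsing works for both files.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        # "HugePages_Surp:  0" -> var=HugePages_Surp, val=0 (any unit lands in _)
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated "continue" entries in the trace
        echo "$val"
        return 0
    done
    return 1
}

# The traced hugepages.sh then collects these values before its allocation checks:
anon=$(get_meminfo AnonHugePages)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
echo "anon_hugepages=$anon surplus_hugepages=$surp resv_hugepages=$resv"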
00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7921124 kB' 'MemAvailable: 9507556 kB' 'Buffers: 2436 kB' 'Cached: 1800236 kB' 'SwapCached: 0 kB' 'Active: 452288 kB' 'Inactive: 1472140 kB' 'Active(anon): 132220 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123340 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135312 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72988 kB' 'KernelStack: 6384 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:14.415 18:12:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.415 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.416 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:14.417 nr_hugepages=1024 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:14.417 resv_hugepages=0 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.417 surplus_hugepages=0 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.417 anon_hugepages=0 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.417 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7921124 kB' 'MemAvailable: 9507556 kB' 'Buffers: 2436 kB' 'Cached: 1800236 kB' 'SwapCached: 0 kB' 'Active: 452196 kB' 'Inactive: 1472140 kB' 'Active(anon): 132128 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 123296 kB' 'Mapped: 48672 kB' 'Shmem: 10464 kB' 'KReclaimable: 62324 kB' 'Slab: 135316 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72992 kB' 'KernelStack: 6416 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 354832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 159596 kB' 'DirectMap2M: 5083136 kB' 'DirectMap1G: 9437184 kB' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.418 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.419 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7921124 kB' 'MemUsed: 4320852 kB' 'SwapCached: 0 kB' 'Active: 452404 kB' 'Inactive: 1472140 kB' 'Active(anon): 132336 kB' 'Inactive(anon): 0 kB' 'Active(file): 320068 kB' 'Inactive(file): 1472140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1802672 kB' 'Mapped: 48672 kB' 'AnonPages: 123504 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62324 kB' 'Slab: 135308 kB' 'SReclaimable: 62324 kB' 'SUnreclaim: 72984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.420 
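Editor's note: the get_meminfo calls traced above all follow one pattern: pick /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node argument is given, strip the "Node <N> " prefix, then split each "key: value" pair on IFS=': ' until the requested key matches and echo its value. The following is a minimal sketch of that pattern based only on the paths and reads visible in the trace, not the actual setup/common.sh source:

    get_meminfo_sketch() {                    # usage: get_meminfo_sketch HugePages_Total [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node stats, as in the node0 lookup above
        fi
        while read -r line; do
            line=${line#"Node $node "}        # per-node lines carry a "Node <N> " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                   # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
                return 0
            fi
        done < "$mem_f"
        return 1
    }

For example, get_meminfo_sketch HugePages_Surp 0 mirrors the node0 query whose scan continues in the trace below.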
18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.420 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.682 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.682 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.682 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.682 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.683 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.684 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.684 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.684 node0=1024 expecting 1024 00:05:14.684 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:14.684 18:12:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:14.684 00:05:14.684 real 0m1.082s 00:05:14.684 user 0m0.555s 00:05:14.684 sys 0m0.598s 00:05:14.684 18:12:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.684 18:12:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:14.684 ************************************ 00:05:14.684 END TEST no_shrink_alloc 00:05:14.684 ************************************ 00:05:14.684 18:12:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:14.684 18:12:26 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:14.684 18:12:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:14.684 18:12:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:14.684 18:12:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.684 18:12:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.684 18:12:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:14.684 18:12:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:14.684 18:12:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:14.684 18:12:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:14.684 00:05:14.684 real 0m4.714s 00:05:14.684 user 0m2.330s 00:05:14.684 sys 0m2.518s 00:05:14.684 18:12:26 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.684 18:12:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:14.684 ************************************ 00:05:14.684 END TEST hugepages 00:05:14.684 ************************************ 00:05:14.684 18:12:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:14.684 18:12:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:14.684 18:12:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.684 18:12:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.684 18:12:26 
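Editor's note: the no_shrink_alloc test above ends by confirming that node0 still reports the expected 1024 pages ("node0=1024 expecting 1024"), and the clear_hp trace then returns every per-size pool to zero and exports CLEAR_HUGE=yes. A rough equivalent of that teardown, assuming the same sysfs layout; this is a sketch, not the setup/hugepages.sh source:

    clear_hp_sketch() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                # assumption: the bare 'echo 0' in the trace is redirected into
                # nr_hugepages (set -x does not print redirections)
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes                 # exported in the trace above; consumed outside this excerpt
    }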
setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:14.684 ************************************ 00:05:14.684 START TEST driver 00:05:14.684 ************************************ 00:05:14.684 18:12:26 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:14.684 * Looking for test storage... 00:05:14.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:14.684 18:12:26 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:14.684 18:12:26 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.684 18:12:26 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.250 18:12:27 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:15.250 18:12:27 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.250 18:12:27 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.250 18:12:27 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:15.250 ************************************ 00:05:15.250 START TEST guess_driver 00:05:15.250 ************************************ 00:05:15.250 18:12:27 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:15.251 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:15.251 Looking for driver=uio_pci_generic 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ 
\d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.251 18:12:27 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.187 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:16.187 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:16.187 18:12:27 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.187 18:12:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.187 18:12:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:16.187 18:12:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.187 18:12:28 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:16.187 18:12:28 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:16.187 18:12:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.187 18:12:28 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:16.187 18:12:28 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:16.187 18:12:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.187 18:12:28 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.753 00:05:16.753 real 0m1.475s 00:05:16.753 user 0m0.569s 00:05:16.753 sys 0m0.931s 00:05:16.753 18:12:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.753 ************************************ 00:05:16.753 18:12:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:16.753 END TEST guess_driver 00:05:16.753 ************************************ 00:05:16.753 18:12:28 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:16.753 ************************************ 00:05:16.753 END TEST driver 00:05:16.753 ************************************ 00:05:16.753 00:05:16.753 real 0m2.205s 00:05:16.753 user 0m0.806s 00:05:16.753 sys 0m1.472s 00:05:16.753 18:12:28 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.753 18:12:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.042 18:12:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:17.042 18:12:28 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:17.042 18:12:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.042 18:12:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.042 18:12:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.042 ************************************ 00:05:17.042 START TEST devices 00:05:17.042 ************************************ 00:05:17.042 18:12:28 setup.sh.devices -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:17.042 * Looking for test storage... 00:05:17.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:17.042 18:12:28 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:17.042 18:12:28 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:17.042 18:12:28 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.042 18:12:28 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.612 18:12:29 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:05:17.612 18:12:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:17.613 18:12:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.613 18:12:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.613 18:12:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:05:17.613 18:12:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:05:17.613 18:12:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:17.613 18:12:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.613 18:12:29 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:17.613 18:12:29 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:17.613 18:12:29 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:17.613 18:12:29 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:17.613 18:12:29 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:17.613 18:12:29 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:17.613 18:12:29 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:17.613 18:12:29 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:17.613 18:12:29 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:17.613 18:12:29 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:17.613 18:12:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:17.613 
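Editor's note: before mounting anything, the devices test traced above filters the candidate disks: every nvme namespace whose queue/zoned attribute reports something other than "none" is skipped, and only devices of at least 3221225472 bytes (3 GiB, the min_disk_size set in the trace) are kept. A compact sketch of that filter, using the sysfs attributes visible in the trace; the byte-size calculation is an assumption based on the standard sysfs size attribute, which counts 512-byte sectors:

    min_disk_size=3221225472                  # 3 GiB, the value set in the trace
    blocks=()
    for dev in /sys/block/nvme*; do           # the trace additionally skips nvme*c* multipath nodes
        name=${dev##*/}
        # a namespace is zoned if queue/zoned reports anything other than "none"
        [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]] && continue
        bytes=$(( $(<"$dev/size") * 512 ))    # assumption: size attribute is in 512-byte sectors
        (( bytes >= min_disk_size )) && blocks+=("$name")
    done

The GPT probe that follows in the trace (spdk-gpt.py plus blkid, "No valid GPT data, bailing") is a separate in-use check applied to each surviving device.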
18:12:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:17.613 18:12:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:17.613 18:12:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:17.613 18:12:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:17.613 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:17.613 18:12:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:17.613 18:12:29 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:17.871 No valid GPT data, bailing 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:17.871 18:12:29 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:17.871 18:12:29 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:17.871 18:12:29 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:17.871 No valid GPT data, bailing 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:17.871 18:12:29 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:17.871 18:12:29 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:17.871 18:12:29 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:17.871 18:12:29 setup.sh.devices -- 
setup/devices.sh@201 -- # ctrl=nvme0 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:17.871 No valid GPT data, bailing 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:17.871 18:12:29 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:17.871 18:12:29 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:17.871 18:12:29 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:17.871 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:17.871 18:12:29 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:18.130 No valid GPT data, bailing 00:05:18.130 18:12:29 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:18.130 18:12:29 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:18.130 18:12:29 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:18.130 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:18.130 18:12:29 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:18.130 18:12:29 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:18.130 18:12:29 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:18.130 18:12:29 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:18.130 18:12:29 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.130 18:12:29 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:18.130 18:12:29 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:18.130 18:12:29 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:18.130 18:12:29 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:18.130 18:12:29 setup.sh.devices -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.130 18:12:29 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.130 18:12:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:18.130 ************************************ 00:05:18.130 START TEST nvme_mount 00:05:18.130 ************************************ 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:18.130 18:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:19.065 Creating new GPT entries in memory. 00:05:19.065 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:19.065 other utilities. 00:05:19.065 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:19.065 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.065 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.065 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.065 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:19.999 Creating new GPT entries in memory. 00:05:19.999 The operation has completed successfully. 
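The trace above walks /sys/block/nvme* (skipping controller nodes), maps each namespace back to its controller's PCI address, and treats a namespace as usable only when spdk-gpt.py and blkid find no partition-table signature on it ("No valid GPT data, bailing") and its size clears min_disk_size. nvme_mount then wipes the chosen disk and creates a first partition over logical sectors 2048..264191 (the 1 GiB target divided down by the size /= 4096 step in the trace) while sync_dev_uevents.sh waits for the partition uevent. A condensed sketch of those steps, with the command flags copied from the trace; the disk name and the sector range are specific to this run and would differ on other hardware:

    disk=/dev/nvme0n1
    # Free-namespace check: blkid prints nothing when no partition table is present.
    pt=$(blkid -s PTTYPE -o value "$disk" || true)
    [[ -z "$pt" ]] || { echo "$disk already carries a $pt partition table"; exit 1; }
    # Wipe stale GPT structures, then create partition 1 over sectors 2048..264191,
    # holding an exclusive lock on the disk so parallel tests cannot repartition it
    # at the same time.
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:264191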
00:05:19.999 18:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:19.999 18:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.999 18:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57766 00:05:19.999 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.999 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:19.999 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.999 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:19.999 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:20.258 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.516 18:12:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:20.516 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.516 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:20.516 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.516 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:20.516 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:20.516 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.775 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:20.775 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:20.775 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:20.775 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.775 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.775 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:20.775 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:20.775 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:20.775 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:20.775 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:21.032 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:21.032 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:21.032 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:21.032 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:21.032 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:21.032 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:21.032 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.032 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:21.032 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:21.032 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.033 18:12:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.291 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.291 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:21.291 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:21.291 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.291 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.291 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.291 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.291 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.291 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.291 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.549 18:12:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.806 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.806 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:21.806 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:21.806 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.806 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.806 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.806 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:21.806 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.064 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.064 00:05:22.064 real 0m4.043s 00:05:22.064 user 0m0.686s 00:05:22.064 sys 0m1.085s 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.064 18:12:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:22.064 ************************************ 00:05:22.064 END TEST nvme_mount 00:05:22.064 
************************************ 00:05:22.064 18:12:34 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:22.064 18:12:34 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:22.064 18:12:34 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.064 18:12:34 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.064 18:12:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:22.064 ************************************ 00:05:22.064 START TEST dm_mount 00:05:22.064 ************************************ 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:22.064 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:23.470 Creating new GPT entries in memory. 00:05:23.470 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:23.470 other utilities. 00:05:23.470 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:23.470 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.470 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:23.470 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:23.470 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:24.403 Creating new GPT entries in memory. 00:05:24.403 The operation has completed successfully. 00:05:24.403 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:24.403 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.403 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:24.403 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.403 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:25.338 The operation has completed successfully. 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 58202 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:25.338 
18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.338 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.598 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.598 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:25.598 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:25.598 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.598 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.598 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.598 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.598 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:25.857 18:12:37 
setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.857 18:12:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:26.116 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.116 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:26.116 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:26.116 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.116 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.116 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.116 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 
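Both mount tests traced here follow the same cycle: make an ext4 filesystem on the target (mkfs.ext4 -qF), mount it under test/setup/nvme_mount or test/setup/dm_mount, drop a marker file, re-run setup.sh config with PCI_ALLOWED pinned to the controller's BDF and check that the "Active devices: ..., so not binding PCI dev" line names the mount, then unmount and wipe the signatures. dm_mount additionally stacks a device-mapper target on top of the two partitions (dmsetup create nvme_dm_test, which resolves to dm-0 and shows up as a holder of both partitions) and tears it down with dmsetup remove --force. A minimal sketch of that cycle; the mount path comes from the trace, while the linear table below is only an assumption for illustration, since the trace shows the dmsetup create call and the resulting holders but not the table it was fed:

    part=/dev/nvme0n1p1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mkfs.ext4 -qF "$part"
    mkdir -p "$mnt" && mount "$part" "$mnt"
    touch "$mnt/test_nvme"            # marker file the verify step looks for
    # ... setup.sh config must now report the namespace as active and skip binding it ...
    umount "$mnt"
    wipefs --all "$part"              # clears the ext4 magic (53 ef), as logged above
    # dm_mount variant: put a device-mapper target over the partition first.
    echo "0 $(blockdev --getsz "$part") linear $part 0" | dmsetup create nvme_dm_test
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    dmsetup remove --force nvme_dm_test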
00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:26.374 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:26.374 00:05:26.374 real 0m4.231s 00:05:26.374 user 0m0.453s 00:05:26.374 sys 0m0.742s 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.374 18:12:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:26.374 ************************************ 00:05:26.374 END TEST dm_mount 00:05:26.374 ************************************ 00:05:26.374 18:12:38 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:26.374 18:12:38 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:26.374 18:12:38 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:26.374 18:12:38 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.374 18:12:38 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.374 18:12:38 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:26.374 18:12:38 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.374 18:12:38 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.634 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:26.634 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:26.634 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:26.634 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:26.634 18:12:38 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:26.634 18:12:38 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:26.634 18:12:38 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:26.634 18:12:38 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.634 18:12:38 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:26.634 18:12:38 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.634 18:12:38 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:26.634 00:05:26.634 real 0m9.794s 00:05:26.634 user 0m1.784s 00:05:26.634 sys 0m2.412s 00:05:26.634 18:12:38 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.634 18:12:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:26.634 ************************************ 00:05:26.634 END TEST devices 00:05:26.634 ************************************ 00:05:26.634 18:12:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:26.634 ************************************ 00:05:26.634 END TEST setup.sh 00:05:26.634 ************************************ 00:05:26.634 00:05:26.634 real 0m21.717s 00:05:26.634 user 0m7.078s 00:05:26.634 sys 0m9.173s 00:05:26.634 18:12:38 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.634 18:12:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:26.892 18:12:38 -- common/autotest_common.sh@1142 -- # return 0 00:05:26.892 18:12:38 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:27.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.459 Hugepages 00:05:27.459 node hugesize free / total 00:05:27.459 node0 1048576kB 0 / 0 00:05:27.459 node0 2048kB 2048 / 2048 00:05:27.459 00:05:27.459 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:27.459 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:27.459 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:27.717 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:27.717 18:12:39 -- spdk/autotest.sh@130 -- # uname -s 00:05:27.717 18:12:39 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:27.717 18:12:39 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:27.717 18:12:39 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:28.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.282 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.540 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.540 18:12:40 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:29.477 18:12:41 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:29.477 18:12:41 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:29.477 18:12:41 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:29.477 18:12:41 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:29.477 18:12:41 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:29.477 18:12:41 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:29.477 18:12:41 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.477 18:12:41 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:29.477 18:12:41 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:29.477 18:12:41 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:29.477 18:12:41 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:29.477 18:12:41 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.045 Waiting for block devices as requested 00:05:30.045 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.045 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.305 18:12:42 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:30.305 18:12:42 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:30.305 18:12:42 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:30.305 18:12:42 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:30.305 18:12:42 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:30.305 18:12:42 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:30.305 18:12:42 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:30.305 18:12:42 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:30.305 18:12:42 -- common/autotest_common.sh@1539 -- # 
nvme_ctrlr=/dev/nvme1 00:05:30.305 18:12:42 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:30.305 18:12:42 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:30.305 18:12:42 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:30.305 18:12:42 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:30.305 18:12:42 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:30.305 18:12:42 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:30.305 18:12:42 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:30.305 18:12:42 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:30.305 18:12:42 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:30.305 18:12:42 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:30.305 18:12:42 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:30.305 18:12:42 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:30.305 18:12:42 -- common/autotest_common.sh@1557 -- # continue 00:05:30.305 18:12:42 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:30.305 18:12:42 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:30.305 18:12:42 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:30.305 18:12:42 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:30.305 18:12:42 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:30.305 18:12:42 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:30.305 18:12:42 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:30.305 18:12:42 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:30.305 18:12:42 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:30.305 18:12:42 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:30.305 18:12:42 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:30.305 18:12:42 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:30.305 18:12:42 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:30.305 18:12:42 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:30.305 18:12:42 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:30.305 18:12:42 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:30.305 18:12:42 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:30.305 18:12:42 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:30.305 18:12:42 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:30.305 18:12:42 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:30.305 18:12:42 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:30.305 18:12:42 -- common/autotest_common.sh@1557 -- # continue 00:05:30.305 18:12:42 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:30.305 18:12:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.305 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:05:30.305 18:12:42 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:30.305 18:12:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.305 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:05:30.305 18:12:42 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:30.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:05:31.129 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.129 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:31.129 18:12:43 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:31.129 18:12:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:31.129 18:12:43 -- common/autotest_common.sh@10 -- # set +x 00:05:31.129 18:12:43 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:31.129 18:12:43 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:31.129 18:12:43 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:31.129 18:12:43 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:31.129 18:12:43 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:31.129 18:12:43 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:31.129 18:12:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:31.129 18:12:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:31.129 18:12:43 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.129 18:12:43 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:31.129 18:12:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:31.420 18:12:43 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:05:31.420 18:12:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:31.420 18:12:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:31.420 18:12:43 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:31.420 18:12:43 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:31.420 18:12:43 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:31.420 18:12:43 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:31.420 18:12:43 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:31.420 18:12:43 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:31.420 18:12:43 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:31.420 18:12:43 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:31.420 18:12:43 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:31.420 18:12:43 -- common/autotest_common.sh@1593 -- # return 0 00:05:31.420 18:12:43 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:31.420 18:12:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:31.420 18:12:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:31.420 18:12:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:31.420 18:12:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:31.420 18:12:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.420 18:12:43 -- common/autotest_common.sh@10 -- # set +x 00:05:31.420 18:12:43 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:05:31.420 18:12:43 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:31.420 18:12:43 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:31.420 18:12:43 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:31.420 18:12:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.420 18:12:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.420 18:12:43 -- common/autotest_common.sh@10 -- # set +x 00:05:31.420 ************************************ 00:05:31.420 START TEST env 00:05:31.420 
************************************ 00:05:31.420 18:12:43 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:31.420 * Looking for test storage... 00:05:31.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:31.420 18:12:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:31.420 18:12:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.420 18:12:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.420 18:12:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.420 ************************************ 00:05:31.420 START TEST env_memory 00:05:31.420 ************************************ 00:05:31.420 18:12:43 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:31.420 00:05:31.420 00:05:31.420 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.420 http://cunit.sourceforge.net/ 00:05:31.420 00:05:31.420 00:05:31.420 Suite: memory 00:05:31.420 Test: alloc and free memory map ...[2024-07-22 18:12:43.380753] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:31.420 passed 00:05:31.682 Test: mem map translation ...[2024-07-22 18:12:43.442317] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:31.682 [2024-07-22 18:12:43.442401] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:31.682 [2024-07-22 18:12:43.442509] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:31.682 [2024-07-22 18:12:43.442541] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:31.682 passed 00:05:31.682 Test: mem map registration ...[2024-07-22 18:12:43.541284] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:31.682 [2024-07-22 18:12:43.541375] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:31.682 passed 00:05:31.682 Test: mem map adjacent registrations ...passed 00:05:31.682 00:05:31.682 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.682 suites 1 1 n/a 0 0 00:05:31.683 tests 4 4 4 0 0 00:05:31.683 asserts 152 152 152 0 n/a 00:05:31.683 00:05:31.683 Elapsed time = 0.346 seconds 00:05:31.683 00:05:31.683 real 0m0.388s 00:05:31.683 user 0m0.353s 00:05:31.683 sys 0m0.029s 00:05:31.683 18:12:43 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.683 ************************************ 00:05:31.683 END TEST env_memory 00:05:31.683 ************************************ 00:05:31.683 18:12:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:31.941 18:12:43 env -- common/autotest_common.sh@1142 -- # return 0 00:05:31.941 18:12:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:31.941 18:12:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.941 18:12:43 env -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.941 18:12:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.941 ************************************ 00:05:31.941 START TEST env_vtophys 00:05:31.941 ************************************ 00:05:31.941 18:12:43 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:31.941 EAL: lib.eal log level changed from notice to debug 00:05:31.941 EAL: Detected lcore 0 as core 0 on socket 0 00:05:31.941 EAL: Detected lcore 1 as core 0 on socket 0 00:05:31.941 EAL: Detected lcore 2 as core 0 on socket 0 00:05:31.941 EAL: Detected lcore 3 as core 0 on socket 0 00:05:31.941 EAL: Detected lcore 4 as core 0 on socket 0 00:05:31.941 EAL: Detected lcore 5 as core 0 on socket 0 00:05:31.941 EAL: Detected lcore 6 as core 0 on socket 0 00:05:31.941 EAL: Detected lcore 7 as core 0 on socket 0 00:05:31.941 EAL: Detected lcore 8 as core 0 on socket 0 00:05:31.941 EAL: Detected lcore 9 as core 0 on socket 0 00:05:31.941 EAL: Maximum logical cores by configuration: 128 00:05:31.941 EAL: Detected CPU lcores: 10 00:05:31.941 EAL: Detected NUMA nodes: 1 00:05:31.941 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:31.941 EAL: Detected shared linkage of DPDK 00:05:31.941 EAL: No shared files mode enabled, IPC will be disabled 00:05:31.941 EAL: Selected IOVA mode 'PA' 00:05:31.941 EAL: Probing VFIO support... 00:05:31.941 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:31.941 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:31.941 EAL: Ask a virtual area of 0x2e000 bytes 00:05:31.941 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:31.941 EAL: Setting up physically contiguous memory... 
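For env_vtophys, EAL starts in IOVA-PA mode because no vfio module is present, and reserves four 0x400000000-byte (16 GiB) virtual areas for 2 MiB-hugepage memseg lists; the earlier setup.sh status output showed node0 holding 2048 pre-allocated 2 MiB hugepages backing them. A quick way to confirm the same preconditions on a host before running the test, using the standard kernel procfs/sysfs locations rather than anything SPDK-specific:

    grep -i '^HugePages' /proc/meminfo                          # pool size and free count
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages  # 2048 on this node
    # With no vfio module loaded, EAL falls back to uio and physical-address IOVA,
    # which is what the "VFIO modules not loaded" lines above report.
    lsmod | grep -E '^vfio' || echo 'vfio not loaded'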
00:05:31.941 EAL: Setting maximum number of open files to 524288 00:05:31.941 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:31.941 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:31.941 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.941 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:31.941 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.941 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.941 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:31.941 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:31.941 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.941 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:31.941 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.941 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.941 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:31.941 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:31.941 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.941 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:31.941 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.941 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.941 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:31.941 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:31.941 EAL: Ask a virtual area of 0x61000 bytes 00:05:31.941 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:31.941 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:31.941 EAL: Ask a virtual area of 0x400000000 bytes 00:05:31.941 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:31.941 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:31.941 EAL: Hugepages will be freed exactly as allocated. 00:05:31.941 EAL: No shared files mode enabled, IPC is disabled 00:05:31.941 EAL: No shared files mode enabled, IPC is disabled 00:05:31.941 EAL: TSC frequency is ~2200000 KHz 00:05:31.941 EAL: Main lcore 0 is ready (tid=7f178930ba40;cpuset=[0]) 00:05:31.941 EAL: Trying to obtain current memory policy. 00:05:31.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.941 EAL: Restoring previous memory policy: 0 00:05:31.941 EAL: request: mp_malloc_sync 00:05:31.941 EAL: No shared files mode enabled, IPC is disabled 00:05:31.941 EAL: Heap on socket 0 was expanded by 2MB 00:05:31.941 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:32.200 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:32.200 EAL: Mem event callback 'spdk:(nil)' registered 00:05:32.200 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:32.200 00:05:32.200 00:05:32.200 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.200 http://cunit.sourceforge.net/ 00:05:32.200 00:05:32.200 00:05:32.200 Suite: components_suite 00:05:32.458 Test: vtophys_malloc_test ...passed 00:05:32.716 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:32.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.716 EAL: Restoring previous memory policy: 4 00:05:32.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.716 EAL: request: mp_malloc_sync 00:05:32.716 EAL: No shared files mode enabled, IPC is disabled 00:05:32.716 EAL: Heap on socket 0 was expanded by 4MB 00:05:32.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.716 EAL: request: mp_malloc_sync 00:05:32.716 EAL: No shared files mode enabled, IPC is disabled 00:05:32.716 EAL: Heap on socket 0 was shrunk by 4MB 00:05:32.716 EAL: Trying to obtain current memory policy. 00:05:32.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.716 EAL: Restoring previous memory policy: 4 00:05:32.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.716 EAL: request: mp_malloc_sync 00:05:32.716 EAL: No shared files mode enabled, IPC is disabled 00:05:32.716 EAL: Heap on socket 0 was expanded by 6MB 00:05:32.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.716 EAL: request: mp_malloc_sync 00:05:32.716 EAL: No shared files mode enabled, IPC is disabled 00:05:32.716 EAL: Heap on socket 0 was shrunk by 6MB 00:05:32.716 EAL: Trying to obtain current memory policy. 00:05:32.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.716 EAL: Restoring previous memory policy: 4 00:05:32.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.716 EAL: request: mp_malloc_sync 00:05:32.716 EAL: No shared files mode enabled, IPC is disabled 00:05:32.716 EAL: Heap on socket 0 was expanded by 10MB 00:05:32.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.716 EAL: request: mp_malloc_sync 00:05:32.716 EAL: No shared files mode enabled, IPC is disabled 00:05:32.716 EAL: Heap on socket 0 was shrunk by 10MB 00:05:32.716 EAL: Trying to obtain current memory policy. 00:05:32.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.716 EAL: Restoring previous memory policy: 4 00:05:32.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.716 EAL: request: mp_malloc_sync 00:05:32.716 EAL: No shared files mode enabled, IPC is disabled 00:05:32.716 EAL: Heap on socket 0 was expanded by 18MB 00:05:32.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.716 EAL: request: mp_malloc_sync 00:05:32.716 EAL: No shared files mode enabled, IPC is disabled 00:05:32.716 EAL: Heap on socket 0 was shrunk by 18MB 00:05:32.716 EAL: Trying to obtain current memory policy. 00:05:32.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.716 EAL: Restoring previous memory policy: 4 00:05:32.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.716 EAL: request: mp_malloc_sync 00:05:32.716 EAL: No shared files mode enabled, IPC is disabled 00:05:32.716 EAL: Heap on socket 0 was expanded by 34MB 00:05:32.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.716 EAL: request: mp_malloc_sync 00:05:32.716 EAL: No shared files mode enabled, IPC is disabled 00:05:32.716 EAL: Heap on socket 0 was shrunk by 34MB 00:05:32.716 EAL: Trying to obtain current memory policy. 
00:05:32.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.716 EAL: Restoring previous memory policy: 4 00:05:32.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.716 EAL: request: mp_malloc_sync 00:05:32.716 EAL: No shared files mode enabled, IPC is disabled 00:05:32.716 EAL: Heap on socket 0 was expanded by 66MB 00:05:32.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.975 EAL: request: mp_malloc_sync 00:05:32.975 EAL: No shared files mode enabled, IPC is disabled 00:05:32.975 EAL: Heap on socket 0 was shrunk by 66MB 00:05:32.975 EAL: Trying to obtain current memory policy. 00:05:32.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.975 EAL: Restoring previous memory policy: 4 00:05:32.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.975 EAL: request: mp_malloc_sync 00:05:32.975 EAL: No shared files mode enabled, IPC is disabled 00:05:32.975 EAL: Heap on socket 0 was expanded by 130MB 00:05:33.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.233 EAL: request: mp_malloc_sync 00:05:33.233 EAL: No shared files mode enabled, IPC is disabled 00:05:33.233 EAL: Heap on socket 0 was shrunk by 130MB 00:05:33.503 EAL: Trying to obtain current memory policy. 00:05:33.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.503 EAL: Restoring previous memory policy: 4 00:05:33.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.503 EAL: request: mp_malloc_sync 00:05:33.503 EAL: No shared files mode enabled, IPC is disabled 00:05:33.503 EAL: Heap on socket 0 was expanded by 258MB 00:05:34.069 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.069 EAL: request: mp_malloc_sync 00:05:34.069 EAL: No shared files mode enabled, IPC is disabled 00:05:34.069 EAL: Heap on socket 0 was shrunk by 258MB 00:05:34.635 EAL: Trying to obtain current memory policy. 00:05:34.635 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.635 EAL: Restoring previous memory policy: 4 00:05:34.635 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.635 EAL: request: mp_malloc_sync 00:05:34.635 EAL: No shared files mode enabled, IPC is disabled 00:05:34.635 EAL: Heap on socket 0 was expanded by 514MB 00:05:35.677 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.677 EAL: request: mp_malloc_sync 00:05:35.677 EAL: No shared files mode enabled, IPC is disabled 00:05:35.677 EAL: Heap on socket 0 was shrunk by 514MB 00:05:36.257 EAL: Trying to obtain current memory policy. 
00:05:36.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.536 EAL: Restoring previous memory policy: 4 00:05:36.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.536 EAL: request: mp_malloc_sync 00:05:36.536 EAL: No shared files mode enabled, IPC is disabled 00:05:36.536 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.691 EAL: request: mp_malloc_sync 00:05:38.691 EAL: No shared files mode enabled, IPC is disabled 00:05:38.691 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:40.637 passed 00:05:40.637 00:05:40.637 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.637 suites 1 1 n/a 0 0 00:05:40.637 tests 2 2 2 0 0 00:05:40.637 asserts 5306 5306 5306 0 n/a 00:05:40.637 00:05:40.637 Elapsed time = 8.128 seconds 00:05:40.637 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.637 EAL: request: mp_malloc_sync 00:05:40.637 EAL: No shared files mode enabled, IPC is disabled 00:05:40.637 EAL: Heap on socket 0 was shrunk by 2MB 00:05:40.637 EAL: No shared files mode enabled, IPC is disabled 00:05:40.637 EAL: No shared files mode enabled, IPC is disabled 00:05:40.637 EAL: No shared files mode enabled, IPC is disabled 00:05:40.637 ************************************ 00:05:40.637 END TEST env_vtophys 00:05:40.637 ************************************ 00:05:40.637 00:05:40.637 real 0m8.476s 00:05:40.637 user 0m7.214s 00:05:40.637 sys 0m1.082s 00:05:40.637 18:12:52 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.637 18:12:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:40.637 18:12:52 env -- common/autotest_common.sh@1142 -- # return 0 00:05:40.637 18:12:52 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:40.637 18:12:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.637 18:12:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.637 18:12:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.637 ************************************ 00:05:40.637 START TEST env_pci 00:05:40.637 ************************************ 00:05:40.637 18:12:52 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:40.637 00:05:40.637 00:05:40.637 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.637 http://cunit.sourceforge.net/ 00:05:40.637 00:05:40.637 00:05:40.637 Suite: pci 00:05:40.637 Test: pci_hook ...[2024-07-22 18:12:52.298227] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59473 has claimed it 00:05:40.637 passed 00:05:40.637 00:05:40.637 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.637 suites 1 1 n/a 0 0 00:05:40.637 tests 1 1 1 0 0 00:05:40.637 asserts 25 25 25 0 n/a 00:05:40.637 00:05:40.637 Elapsed time = 0.007 seconds 00:05:40.637 EAL: Cannot find device (10000:00:01.0) 00:05:40.637 EAL: Failed to attach device on primary process 00:05:40.637 ************************************ 00:05:40.637 END TEST env_pci 00:05:40.637 ************************************ 00:05:40.637 00:05:40.637 real 0m0.067s 00:05:40.637 user 0m0.030s 00:05:40.637 sys 0m0.036s 00:05:40.637 18:12:52 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.637 18:12:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:40.637 18:12:52 env -- common/autotest_common.sh@1142 -- # 
return 0 00:05:40.637 18:12:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:40.637 18:12:52 env -- env/env.sh@15 -- # uname 00:05:40.637 18:12:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.637 18:12:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:40.637 18:12:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.637 18:12:52 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:40.637 18:12:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.637 18:12:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.637 ************************************ 00:05:40.637 START TEST env_dpdk_post_init 00:05:40.637 ************************************ 00:05:40.637 18:12:52 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.637 EAL: Detected CPU lcores: 10 00:05:40.637 EAL: Detected NUMA nodes: 1 00:05:40.637 EAL: Detected shared linkage of DPDK 00:05:40.637 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.637 EAL: Selected IOVA mode 'PA' 00:05:40.637 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.637 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:40.637 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:40.637 Starting DPDK initialization... 00:05:40.637 Starting SPDK post initialization... 00:05:40.637 SPDK NVMe probe 00:05:40.637 Attaching to 0000:00:10.0 00:05:40.637 Attaching to 0000:00:11.0 00:05:40.637 Attached to 0000:00:10.0 00:05:40.637 Attached to 0000:00:11.0 00:05:40.637 Cleaning up... 
00:05:40.637 00:05:40.637 real 0m0.273s 00:05:40.637 user 0m0.081s 00:05:40.637 sys 0m0.092s 00:05:40.637 18:12:52 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.637 ************************************ 00:05:40.637 END TEST env_dpdk_post_init 00:05:40.637 18:12:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:40.637 ************************************ 00:05:40.896 18:12:52 env -- common/autotest_common.sh@1142 -- # return 0 00:05:40.896 18:12:52 env -- env/env.sh@26 -- # uname 00:05:40.896 18:12:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:40.896 18:12:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.896 18:12:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.896 18:12:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.896 18:12:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.896 ************************************ 00:05:40.896 START TEST env_mem_callbacks 00:05:40.896 ************************************ 00:05:40.896 18:12:52 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.896 EAL: Detected CPU lcores: 10 00:05:40.896 EAL: Detected NUMA nodes: 1 00:05:40.896 EAL: Detected shared linkage of DPDK 00:05:40.896 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.896 EAL: Selected IOVA mode 'PA' 00:05:40.896 00:05:40.896 00:05:40.896 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.896 http://cunit.sourceforge.net/ 00:05:40.896 00:05:40.896 00:05:40.896 Suite: memory 00:05:40.896 Test: test ... 00:05:40.896 register 0x200000200000 2097152 00:05:40.896 malloc 3145728 00:05:40.896 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.896 register 0x200000400000 4194304 00:05:40.896 buf 0x2000004fffc0 len 3145728 PASSED 00:05:40.896 malloc 64 00:05:40.896 buf 0x2000004ffec0 len 64 PASSED 00:05:40.896 malloc 4194304 00:05:40.896 register 0x200000800000 6291456 00:05:40.896 buf 0x2000009fffc0 len 4194304 PASSED 00:05:40.896 free 0x2000004fffc0 3145728 00:05:40.896 free 0x2000004ffec0 64 00:05:40.896 unregister 0x200000400000 4194304 PASSED 00:05:40.896 free 0x2000009fffc0 4194304 00:05:41.155 unregister 0x200000800000 6291456 PASSED 00:05:41.155 malloc 8388608 00:05:41.155 register 0x200000400000 10485760 00:05:41.155 buf 0x2000005fffc0 len 8388608 PASSED 00:05:41.155 free 0x2000005fffc0 8388608 00:05:41.155 unregister 0x200000400000 10485760 PASSED 00:05:41.155 passed 00:05:41.155 00:05:41.155 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.155 suites 1 1 n/a 0 0 00:05:41.155 tests 1 1 1 0 0 00:05:41.155 asserts 15 15 15 0 n/a 00:05:41.155 00:05:41.155 Elapsed time = 0.062 seconds 00:05:41.155 00:05:41.155 real 0m0.259s 00:05:41.155 user 0m0.083s 00:05:41.155 sys 0m0.072s 00:05:41.155 18:12:52 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.155 18:12:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:41.155 ************************************ 00:05:41.155 END TEST env_mem_callbacks 00:05:41.155 ************************************ 00:05:41.155 18:12:53 env -- common/autotest_common.sh@1142 -- # return 0 00:05:41.155 ************************************ 00:05:41.155 END TEST env 00:05:41.155 ************************************ 00:05:41.155 00:05:41.155 real 0m9.791s 00:05:41.155 user 
0m7.871s 00:05:41.155 sys 0m1.514s 00:05:41.155 18:12:53 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.155 18:12:53 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.155 18:12:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.155 18:12:53 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:41.155 18:12:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.155 18:12:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.155 18:12:53 -- common/autotest_common.sh@10 -- # set +x 00:05:41.155 ************************************ 00:05:41.155 START TEST rpc 00:05:41.155 ************************************ 00:05:41.155 18:12:53 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:41.155 * Looking for test storage... 00:05:41.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.155 18:12:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59588 00:05:41.155 18:12:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:41.155 18:12:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.155 18:12:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59588 00:05:41.155 18:12:53 rpc -- common/autotest_common.sh@829 -- # '[' -z 59588 ']' 00:05:41.155 18:12:53 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.155 18:12:53 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.155 18:12:53 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.155 18:12:53 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.155 18:12:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.414 [2024-07-22 18:12:53.250234] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:41.414 [2024-07-22 18:12:53.250413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59588 ] 00:05:41.414 [2024-07-22 18:12:53.417120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.672 [2024-07-22 18:12:53.649133] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.672 [2024-07-22 18:12:53.649613] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59588' to capture a snapshot of events at runtime. 00:05:41.672 [2024-07-22 18:12:53.649691] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.672 [2024-07-22 18:12:53.649724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.672 [2024-07-22 18:12:53.649752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59588 for offline analysis/debug. 
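
The rpc_integrity run that follows drives the freshly started spdk_tgt purely over JSON-RPC: it creates an 8 MB malloc bdev with 512-byte blocks, stacks a passthru bdev on top of it, checks the reported bdev count with jq, then deletes both and confirms the list is empty again. A minimal sketch of that sequence, issued directly with scripts/rpc.py, is shown here; it assumes a target already listening on /var/tmp/spdk.sock and a working directory at the SPDK repo root, and it approximates what the rpc_cmd helper does rather than copying the test script.

  # assumed: run from the SPDK repo root, spdk_tgt listening on /var/tmp/spdk.sock
  sock=/var/tmp/spdk.sock
  malloc=$(scripts/rpc.py -s "$sock" bdev_malloc_create 8 512)   # 8 MB, 512 B blocks; prints the bdev name
  scripts/rpc.py -s "$sock" bdev_passthru_create -b "$malloc" -p Passthru0
  scripts/rpc.py -s "$sock" bdev_get_bdevs | jq length           # expect 2: malloc + passthru
  scripts/rpc.py -s "$sock" bdev_passthru_delete Passthru0
  scripts/rpc.py -s "$sock" bdev_malloc_delete "$malloc"
  scripts/rpc.py -s "$sock" bdev_get_bdevs | jq length           # expect 0 again
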
00:05:41.672 [2024-07-22 18:12:53.649845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.930 [2024-07-22 18:12:53.858319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.497 18:12:54 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.497 18:12:54 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.498 18:12:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.498 18:12:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.498 18:12:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:42.498 18:12:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:42.498 18:12:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.498 18:12:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.498 18:12:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.498 ************************************ 00:05:42.498 START TEST rpc_integrity 00:05:42.498 ************************************ 00:05:42.498 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:42.498 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.498 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.498 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.498 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.498 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.498 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.756 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.756 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.756 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.756 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.756 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.756 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:42.756 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.756 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.756 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.756 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.756 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.756 { 00:05:42.756 "name": "Malloc0", 00:05:42.756 "aliases": [ 00:05:42.756 "3489e9c8-824a-4e21-bd62-e2dd270cd5d8" 00:05:42.756 ], 00:05:42.757 "product_name": "Malloc disk", 00:05:42.757 "block_size": 512, 00:05:42.757 "num_blocks": 16384, 00:05:42.757 "uuid": "3489e9c8-824a-4e21-bd62-e2dd270cd5d8", 00:05:42.757 "assigned_rate_limits": { 00:05:42.757 "rw_ios_per_sec": 0, 00:05:42.757 "rw_mbytes_per_sec": 0, 00:05:42.757 "r_mbytes_per_sec": 0, 00:05:42.757 "w_mbytes_per_sec": 0 00:05:42.757 }, 00:05:42.757 "claimed": false, 00:05:42.757 "zoned": false, 00:05:42.757 
"supported_io_types": { 00:05:42.757 "read": true, 00:05:42.757 "write": true, 00:05:42.757 "unmap": true, 00:05:42.757 "flush": true, 00:05:42.757 "reset": true, 00:05:42.757 "nvme_admin": false, 00:05:42.757 "nvme_io": false, 00:05:42.757 "nvme_io_md": false, 00:05:42.757 "write_zeroes": true, 00:05:42.757 "zcopy": true, 00:05:42.757 "get_zone_info": false, 00:05:42.757 "zone_management": false, 00:05:42.757 "zone_append": false, 00:05:42.757 "compare": false, 00:05:42.757 "compare_and_write": false, 00:05:42.757 "abort": true, 00:05:42.757 "seek_hole": false, 00:05:42.757 "seek_data": false, 00:05:42.757 "copy": true, 00:05:42.757 "nvme_iov_md": false 00:05:42.757 }, 00:05:42.757 "memory_domains": [ 00:05:42.757 { 00:05:42.757 "dma_device_id": "system", 00:05:42.757 "dma_device_type": 1 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.757 "dma_device_type": 2 00:05:42.757 } 00:05:42.757 ], 00:05:42.757 "driver_specific": {} 00:05:42.757 } 00:05:42.757 ]' 00:05:42.757 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.757 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.757 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:42.757 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.757 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.757 [2024-07-22 18:12:54.660519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:42.757 [2024-07-22 18:12:54.660660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.757 [2024-07-22 18:12:54.660738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:05:42.757 [2024-07-22 18:12:54.660802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.757 [2024-07-22 18:12:54.664048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.757 [2024-07-22 18:12:54.664107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.757 Passthru0 00:05:42.757 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.757 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.757 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.757 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.757 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.757 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.757 { 00:05:42.757 "name": "Malloc0", 00:05:42.757 "aliases": [ 00:05:42.757 "3489e9c8-824a-4e21-bd62-e2dd270cd5d8" 00:05:42.757 ], 00:05:42.757 "product_name": "Malloc disk", 00:05:42.757 "block_size": 512, 00:05:42.757 "num_blocks": 16384, 00:05:42.757 "uuid": "3489e9c8-824a-4e21-bd62-e2dd270cd5d8", 00:05:42.757 "assigned_rate_limits": { 00:05:42.757 "rw_ios_per_sec": 0, 00:05:42.757 "rw_mbytes_per_sec": 0, 00:05:42.757 "r_mbytes_per_sec": 0, 00:05:42.757 "w_mbytes_per_sec": 0 00:05:42.757 }, 00:05:42.757 "claimed": true, 00:05:42.757 "claim_type": "exclusive_write", 00:05:42.757 "zoned": false, 00:05:42.757 "supported_io_types": { 00:05:42.757 "read": true, 00:05:42.757 "write": true, 00:05:42.757 "unmap": true, 00:05:42.757 "flush": true, 00:05:42.757 "reset": true, 00:05:42.757 "nvme_admin": false, 
00:05:42.757 "nvme_io": false, 00:05:42.757 "nvme_io_md": false, 00:05:42.757 "write_zeroes": true, 00:05:42.757 "zcopy": true, 00:05:42.757 "get_zone_info": false, 00:05:42.757 "zone_management": false, 00:05:42.757 "zone_append": false, 00:05:42.757 "compare": false, 00:05:42.757 "compare_and_write": false, 00:05:42.757 "abort": true, 00:05:42.757 "seek_hole": false, 00:05:42.757 "seek_data": false, 00:05:42.757 "copy": true, 00:05:42.757 "nvme_iov_md": false 00:05:42.757 }, 00:05:42.757 "memory_domains": [ 00:05:42.757 { 00:05:42.757 "dma_device_id": "system", 00:05:42.757 "dma_device_type": 1 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.757 "dma_device_type": 2 00:05:42.757 } 00:05:42.757 ], 00:05:42.757 "driver_specific": {} 00:05:42.757 }, 00:05:42.757 { 00:05:42.757 "name": "Passthru0", 00:05:42.757 "aliases": [ 00:05:42.757 "a0dd52bc-53a2-544a-9163-fec1336e1a60" 00:05:42.757 ], 00:05:42.757 "product_name": "passthru", 00:05:42.757 "block_size": 512, 00:05:42.757 "num_blocks": 16384, 00:05:42.757 "uuid": "a0dd52bc-53a2-544a-9163-fec1336e1a60", 00:05:42.757 "assigned_rate_limits": { 00:05:42.757 "rw_ios_per_sec": 0, 00:05:42.757 "rw_mbytes_per_sec": 0, 00:05:42.757 "r_mbytes_per_sec": 0, 00:05:42.757 "w_mbytes_per_sec": 0 00:05:42.757 }, 00:05:42.757 "claimed": false, 00:05:42.757 "zoned": false, 00:05:42.757 "supported_io_types": { 00:05:42.757 "read": true, 00:05:42.757 "write": true, 00:05:42.757 "unmap": true, 00:05:42.757 "flush": true, 00:05:42.757 "reset": true, 00:05:42.757 "nvme_admin": false, 00:05:42.757 "nvme_io": false, 00:05:42.757 "nvme_io_md": false, 00:05:42.757 "write_zeroes": true, 00:05:42.758 "zcopy": true, 00:05:42.758 "get_zone_info": false, 00:05:42.758 "zone_management": false, 00:05:42.758 "zone_append": false, 00:05:42.758 "compare": false, 00:05:42.758 "compare_and_write": false, 00:05:42.758 "abort": true, 00:05:42.758 "seek_hole": false, 00:05:42.758 "seek_data": false, 00:05:42.758 "copy": true, 00:05:42.758 "nvme_iov_md": false 00:05:42.758 }, 00:05:42.758 "memory_domains": [ 00:05:42.758 { 00:05:42.758 "dma_device_id": "system", 00:05:42.758 "dma_device_type": 1 00:05:42.758 }, 00:05:42.758 { 00:05:42.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.758 "dma_device_type": 2 00:05:42.758 } 00:05:42.758 ], 00:05:42.758 "driver_specific": { 00:05:42.758 "passthru": { 00:05:42.758 "name": "Passthru0", 00:05:42.758 "base_bdev_name": "Malloc0" 00:05:42.758 } 00:05:42.758 } 00:05:42.758 } 00:05:42.758 ]' 00:05:42.758 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.758 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.758 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.758 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.758 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.017 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.017 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:43.017 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.017 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.017 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.017 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:43.017 18:12:54 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.017 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.017 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.017 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.017 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:43.017 ************************************ 00:05:43.017 END TEST rpc_integrity 00:05:43.017 ************************************ 00:05:43.017 18:12:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.017 00:05:43.017 real 0m0.388s 00:05:43.017 user 0m0.251s 00:05:43.017 sys 0m0.038s 00:05:43.017 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.017 18:12:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.017 18:12:54 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.017 18:12:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:43.017 18:12:54 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.017 18:12:54 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.017 18:12:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.017 ************************************ 00:05:43.017 START TEST rpc_plugins 00:05:43.017 ************************************ 00:05:43.017 18:12:54 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:43.017 18:12:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:43.017 18:12:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.017 18:12:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.017 18:12:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.017 18:12:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:43.017 18:12:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:43.017 18:12:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.017 18:12:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.017 18:12:54 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.017 18:12:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:43.017 { 00:05:43.017 "name": "Malloc1", 00:05:43.017 "aliases": [ 00:05:43.017 "82822fa3-bbcc-4960-8ace-13e3a7e3b31c" 00:05:43.017 ], 00:05:43.017 "product_name": "Malloc disk", 00:05:43.017 "block_size": 4096, 00:05:43.017 "num_blocks": 256, 00:05:43.017 "uuid": "82822fa3-bbcc-4960-8ace-13e3a7e3b31c", 00:05:43.017 "assigned_rate_limits": { 00:05:43.017 "rw_ios_per_sec": 0, 00:05:43.017 "rw_mbytes_per_sec": 0, 00:05:43.017 "r_mbytes_per_sec": 0, 00:05:43.017 "w_mbytes_per_sec": 0 00:05:43.017 }, 00:05:43.017 "claimed": false, 00:05:43.017 "zoned": false, 00:05:43.017 "supported_io_types": { 00:05:43.017 "read": true, 00:05:43.017 "write": true, 00:05:43.017 "unmap": true, 00:05:43.017 "flush": true, 00:05:43.017 "reset": true, 00:05:43.017 "nvme_admin": false, 00:05:43.017 "nvme_io": false, 00:05:43.017 "nvme_io_md": false, 00:05:43.017 "write_zeroes": true, 00:05:43.017 "zcopy": true, 00:05:43.017 "get_zone_info": false, 00:05:43.017 "zone_management": false, 00:05:43.017 "zone_append": false, 00:05:43.017 "compare": false, 00:05:43.017 "compare_and_write": false, 00:05:43.017 "abort": true, 00:05:43.017 "seek_hole": false, 00:05:43.017 "seek_data": false, 00:05:43.017 "copy": true, 00:05:43.017 
"nvme_iov_md": false 00:05:43.017 }, 00:05:43.017 "memory_domains": [ 00:05:43.017 { 00:05:43.017 "dma_device_id": "system", 00:05:43.017 "dma_device_type": 1 00:05:43.017 }, 00:05:43.017 { 00:05:43.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.017 "dma_device_type": 2 00:05:43.017 } 00:05:43.017 ], 00:05:43.017 "driver_specific": {} 00:05:43.017 } 00:05:43.017 ]' 00:05:43.017 18:12:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:43.017 18:12:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:43.017 18:12:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:43.017 18:12:54 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.017 18:12:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.017 18:12:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.017 18:12:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:43.017 18:12:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.017 18:12:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.017 18:12:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.017 18:12:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:43.017 18:12:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:43.278 ************************************ 00:05:43.278 END TEST rpc_plugins 00:05:43.278 ************************************ 00:05:43.278 18:12:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:43.278 00:05:43.278 real 0m0.163s 00:05:43.278 user 0m0.108s 00:05:43.278 sys 0m0.020s 00:05:43.278 18:12:55 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.278 18:12:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:43.278 18:12:55 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.278 18:12:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:43.278 18:12:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.278 18:12:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.278 18:12:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.278 ************************************ 00:05:43.278 START TEST rpc_trace_cmd_test 00:05:43.278 ************************************ 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:43.278 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59588", 00:05:43.278 "tpoint_group_mask": "0x8", 00:05:43.278 "iscsi_conn": { 00:05:43.278 "mask": "0x2", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 }, 00:05:43.278 "scsi": { 00:05:43.278 "mask": "0x4", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 }, 00:05:43.278 "bdev": { 00:05:43.278 "mask": "0x8", 00:05:43.278 "tpoint_mask": "0xffffffffffffffff" 00:05:43.278 }, 00:05:43.278 "nvmf_rdma": { 00:05:43.278 "mask": "0x10", 00:05:43.278 "tpoint_mask": "0x0" 
00:05:43.278 }, 00:05:43.278 "nvmf_tcp": { 00:05:43.278 "mask": "0x20", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 }, 00:05:43.278 "ftl": { 00:05:43.278 "mask": "0x40", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 }, 00:05:43.278 "blobfs": { 00:05:43.278 "mask": "0x80", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 }, 00:05:43.278 "dsa": { 00:05:43.278 "mask": "0x200", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 }, 00:05:43.278 "thread": { 00:05:43.278 "mask": "0x400", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 }, 00:05:43.278 "nvme_pcie": { 00:05:43.278 "mask": "0x800", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 }, 00:05:43.278 "iaa": { 00:05:43.278 "mask": "0x1000", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 }, 00:05:43.278 "nvme_tcp": { 00:05:43.278 "mask": "0x2000", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 }, 00:05:43.278 "bdev_nvme": { 00:05:43.278 "mask": "0x4000", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 }, 00:05:43.278 "sock": { 00:05:43.278 "mask": "0x8000", 00:05:43.278 "tpoint_mask": "0x0" 00:05:43.278 } 00:05:43.278 }' 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:43.278 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:43.537 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:43.537 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:43.537 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:43.537 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:43.537 ************************************ 00:05:43.537 END TEST rpc_trace_cmd_test 00:05:43.537 ************************************ 00:05:43.537 18:12:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:43.537 00:05:43.537 real 0m0.275s 00:05:43.537 user 0m0.244s 00:05:43.537 sys 0m0.022s 00:05:43.537 18:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.537 18:12:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:43.537 18:12:55 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.537 18:12:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:43.537 18:12:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:43.537 18:12:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:43.537 18:12:55 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.537 18:12:55 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.537 18:12:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.537 ************************************ 00:05:43.537 START TEST rpc_daemon_integrity 00:05:43.537 ************************************ 00:05:43.537 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:43.537 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.537 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.537 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.537 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.537 
18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.537 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:43.537 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:43.537 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:43.537 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.537 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.537 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.538 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:43.538 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:43.538 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.538 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.538 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.538 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:43.538 { 00:05:43.538 "name": "Malloc2", 00:05:43.538 "aliases": [ 00:05:43.538 "a30cf75b-d829-49d0-8382-7b86c0170df3" 00:05:43.538 ], 00:05:43.538 "product_name": "Malloc disk", 00:05:43.538 "block_size": 512, 00:05:43.538 "num_blocks": 16384, 00:05:43.538 "uuid": "a30cf75b-d829-49d0-8382-7b86c0170df3", 00:05:43.538 "assigned_rate_limits": { 00:05:43.538 "rw_ios_per_sec": 0, 00:05:43.538 "rw_mbytes_per_sec": 0, 00:05:43.538 "r_mbytes_per_sec": 0, 00:05:43.538 "w_mbytes_per_sec": 0 00:05:43.538 }, 00:05:43.538 "claimed": false, 00:05:43.538 "zoned": false, 00:05:43.538 "supported_io_types": { 00:05:43.538 "read": true, 00:05:43.538 "write": true, 00:05:43.538 "unmap": true, 00:05:43.538 "flush": true, 00:05:43.538 "reset": true, 00:05:43.538 "nvme_admin": false, 00:05:43.538 "nvme_io": false, 00:05:43.538 "nvme_io_md": false, 00:05:43.538 "write_zeroes": true, 00:05:43.538 "zcopy": true, 00:05:43.538 "get_zone_info": false, 00:05:43.538 "zone_management": false, 00:05:43.538 "zone_append": false, 00:05:43.538 "compare": false, 00:05:43.538 "compare_and_write": false, 00:05:43.538 "abort": true, 00:05:43.538 "seek_hole": false, 00:05:43.538 "seek_data": false, 00:05:43.538 "copy": true, 00:05:43.538 "nvme_iov_md": false 00:05:43.538 }, 00:05:43.538 "memory_domains": [ 00:05:43.538 { 00:05:43.538 "dma_device_id": "system", 00:05:43.538 "dma_device_type": 1 00:05:43.538 }, 00:05:43.538 { 00:05:43.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.538 "dma_device_type": 2 00:05:43.538 } 00:05:43.538 ], 00:05:43.538 "driver_specific": {} 00:05:43.538 } 00:05:43.538 ]' 00:05:43.538 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:43.796 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:43.796 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:43.796 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.796 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.796 [2024-07-22 18:12:55.579413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:43.797 [2024-07-22 18:12:55.579507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.797 [2024-07-22 18:12:55.579555] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:05:43.797 [2024-07-22 18:12:55.579619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.797 [2024-07-22 18:12:55.582799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.797 [2024-07-22 18:12:55.582852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:43.797 Passthru0 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:43.797 { 00:05:43.797 "name": "Malloc2", 00:05:43.797 "aliases": [ 00:05:43.797 "a30cf75b-d829-49d0-8382-7b86c0170df3" 00:05:43.797 ], 00:05:43.797 "product_name": "Malloc disk", 00:05:43.797 "block_size": 512, 00:05:43.797 "num_blocks": 16384, 00:05:43.797 "uuid": "a30cf75b-d829-49d0-8382-7b86c0170df3", 00:05:43.797 "assigned_rate_limits": { 00:05:43.797 "rw_ios_per_sec": 0, 00:05:43.797 "rw_mbytes_per_sec": 0, 00:05:43.797 "r_mbytes_per_sec": 0, 00:05:43.797 "w_mbytes_per_sec": 0 00:05:43.797 }, 00:05:43.797 "claimed": true, 00:05:43.797 "claim_type": "exclusive_write", 00:05:43.797 "zoned": false, 00:05:43.797 "supported_io_types": { 00:05:43.797 "read": true, 00:05:43.797 "write": true, 00:05:43.797 "unmap": true, 00:05:43.797 "flush": true, 00:05:43.797 "reset": true, 00:05:43.797 "nvme_admin": false, 00:05:43.797 "nvme_io": false, 00:05:43.797 "nvme_io_md": false, 00:05:43.797 "write_zeroes": true, 00:05:43.797 "zcopy": true, 00:05:43.797 "get_zone_info": false, 00:05:43.797 "zone_management": false, 00:05:43.797 "zone_append": false, 00:05:43.797 "compare": false, 00:05:43.797 "compare_and_write": false, 00:05:43.797 "abort": true, 00:05:43.797 "seek_hole": false, 00:05:43.797 "seek_data": false, 00:05:43.797 "copy": true, 00:05:43.797 "nvme_iov_md": false 00:05:43.797 }, 00:05:43.797 "memory_domains": [ 00:05:43.797 { 00:05:43.797 "dma_device_id": "system", 00:05:43.797 "dma_device_type": 1 00:05:43.797 }, 00:05:43.797 { 00:05:43.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.797 "dma_device_type": 2 00:05:43.797 } 00:05:43.797 ], 00:05:43.797 "driver_specific": {} 00:05:43.797 }, 00:05:43.797 { 00:05:43.797 "name": "Passthru0", 00:05:43.797 "aliases": [ 00:05:43.797 "65614c24-0bc1-55b6-95d3-996b508ed228" 00:05:43.797 ], 00:05:43.797 "product_name": "passthru", 00:05:43.797 "block_size": 512, 00:05:43.797 "num_blocks": 16384, 00:05:43.797 "uuid": "65614c24-0bc1-55b6-95d3-996b508ed228", 00:05:43.797 "assigned_rate_limits": { 00:05:43.797 "rw_ios_per_sec": 0, 00:05:43.797 "rw_mbytes_per_sec": 0, 00:05:43.797 "r_mbytes_per_sec": 0, 00:05:43.797 "w_mbytes_per_sec": 0 00:05:43.797 }, 00:05:43.797 "claimed": false, 00:05:43.797 "zoned": false, 00:05:43.797 "supported_io_types": { 00:05:43.797 "read": true, 00:05:43.797 "write": true, 00:05:43.797 "unmap": true, 00:05:43.797 "flush": true, 00:05:43.797 "reset": true, 00:05:43.797 "nvme_admin": false, 00:05:43.797 "nvme_io": false, 00:05:43.797 "nvme_io_md": false, 00:05:43.797 "write_zeroes": true, 00:05:43.797 "zcopy": true, 
00:05:43.797 "get_zone_info": false, 00:05:43.797 "zone_management": false, 00:05:43.797 "zone_append": false, 00:05:43.797 "compare": false, 00:05:43.797 "compare_and_write": false, 00:05:43.797 "abort": true, 00:05:43.797 "seek_hole": false, 00:05:43.797 "seek_data": false, 00:05:43.797 "copy": true, 00:05:43.797 "nvme_iov_md": false 00:05:43.797 }, 00:05:43.797 "memory_domains": [ 00:05:43.797 { 00:05:43.797 "dma_device_id": "system", 00:05:43.797 "dma_device_type": 1 00:05:43.797 }, 00:05:43.797 { 00:05:43.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.797 "dma_device_type": 2 00:05:43.797 } 00:05:43.797 ], 00:05:43.797 "driver_specific": { 00:05:43.797 "passthru": { 00:05:43.797 "name": "Passthru0", 00:05:43.797 "base_bdev_name": "Malloc2" 00:05:43.797 } 00:05:43.797 } 00:05:43.797 } 00:05:43.797 ]' 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:43.797 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:43.798 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.798 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:43.798 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.798 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:43.798 ************************************ 00:05:43.798 END TEST rpc_daemon_integrity 00:05:43.798 ************************************ 00:05:43.798 18:12:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.798 00:05:43.798 real 0m0.334s 00:05:43.798 user 0m0.199s 00:05:43.798 sys 0m0.041s 00:05:43.798 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.798 18:12:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:43.798 18:12:55 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:43.798 18:12:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:43.798 18:12:55 rpc -- rpc/rpc.sh@84 -- # killprocess 59588 00:05:43.798 18:12:55 rpc -- common/autotest_common.sh@948 -- # '[' -z 59588 ']' 00:05:43.798 18:12:55 rpc -- common/autotest_common.sh@952 -- # kill -0 59588 00:05:43.798 18:12:55 rpc -- common/autotest_common.sh@953 -- # uname 00:05:43.798 18:12:55 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.798 18:12:55 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59588 00:05:44.056 killing process with pid 59588 00:05:44.056 18:12:55 rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.056 18:12:55 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.056 18:12:55 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59588' 00:05:44.056 18:12:55 rpc -- common/autotest_common.sh@967 -- # kill 59588 00:05:44.056 18:12:55 rpc -- common/autotest_common.sh@972 -- # wait 59588 00:05:46.585 00:05:46.585 real 0m5.030s 00:05:46.585 user 0m5.663s 00:05:46.585 sys 0m0.839s 00:05:46.585 18:12:58 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.585 18:12:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.585 ************************************ 00:05:46.585 END TEST rpc 00:05:46.585 ************************************ 00:05:46.585 18:12:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.585 18:12:58 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:46.585 18:12:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.585 18:12:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.585 18:12:58 -- common/autotest_common.sh@10 -- # set +x 00:05:46.585 ************************************ 00:05:46.585 START TEST skip_rpc 00:05:46.585 ************************************ 00:05:46.585 18:12:58 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:46.585 * Looking for test storage... 00:05:46.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.585 18:12:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:46.585 18:12:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.585 18:12:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:46.585 18:12:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.585 18:12:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.585 18:12:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.585 ************************************ 00:05:46.585 START TEST skip_rpc 00:05:46.585 ************************************ 00:05:46.585 18:12:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:46.585 18:12:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59809 00:05:46.585 18:12:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:46.585 18:12:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.585 18:12:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:46.586 [2024-07-22 18:12:58.344186] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
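
skip_rpc launches the target with --no-rpc-server, so no /var/tmp/spdk.sock listener is ever created and the rpc_cmd spdk_get_version call below is expected to fail (that is what the NOT wrapper asserts). A rough stand-alone version of the same check follows; the paths and the client timeout are assumptions, not copies of the test:

  # start the target without an RPC server, then prove an RPC call cannot succeed
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                        # the test also sleeps before probing
  if scripts/rpc.py -t 2 spdk_get_version; then  # -t: client-side timeout in seconds
      echo "unexpected: RPC server answered" >&2
      kill "$tgt_pid"
      exit 1
  fi
  kill "$tgt_pid"
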
00:05:46.586 [2024-07-22 18:12:58.344406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59809 ] 00:05:46.586 [2024-07-22 18:12:58.522998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.844 [2024-07-22 18:12:58.802510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.102 [2024-07-22 18:12:59.008396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59809 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59809 ']' 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59809 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59809 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59809' 00:05:51.285 killing process with pid 59809 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59809 00:05:51.285 18:13:03 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59809 00:05:53.816 00:05:53.816 real 0m7.272s 00:05:53.816 user 0m6.704s 00:05:53.816 sys 0m0.456s 00:05:53.816 18:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.816 ************************************ 00:05:53.816 END TEST skip_rpc 
00:05:53.816 ************************************ 00:05:53.816 18:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.816 18:13:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:53.816 18:13:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:53.816 18:13:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.816 18:13:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.816 18:13:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.816 ************************************ 00:05:53.816 START TEST skip_rpc_with_json 00:05:53.816 ************************************ 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59914 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59914 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59914 ']' 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.816 18:13:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.816 [2024-07-22 18:13:05.665440] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
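
skip_rpc_with_json builds live state over RPC (an NVMe-oF TCP transport), snapshots the whole subsystem configuration with save_config, and the JSON dump that follows is that snapshot on its way to test/rpc/config.json. The essence of the round trip, sketched with scripts/rpc.py under the assumption that a target is running and the repo layout matches the paths in this log:

  CONFIG=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
  scripts/rpc.py nvmf_create_transport -t tcp        # state that must survive the save/load cycle
  scripts/rpc.py save_config > "$CONFIG"             # dump every subsystem's config as JSON
  # ...restart spdk_tgt, then replay the snapshot into the fresh process:
  scripts/rpc.py load_config < "$CONFIG"
  scripts/rpc.py nvmf_get_transports --trtype tcp    # the TCP transport should now be listed
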
00:05:53.816 [2024-07-22 18:13:05.665621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59914 ] 00:05:54.074 [2024-07-22 18:13:05.854773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.333 [2024-07-22 18:13:06.154839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.592 [2024-07-22 18:13:06.369180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.228 [2024-07-22 18:13:06.981055] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:55.228 request: 00:05:55.228 { 00:05:55.228 "trtype": "tcp", 00:05:55.228 "method": "nvmf_get_transports", 00:05:55.228 "req_id": 1 00:05:55.228 } 00:05:55.228 Got JSON-RPC error response 00:05:55.228 response: 00:05:55.228 { 00:05:55.228 "code": -19, 00:05:55.228 "message": "No such device" 00:05:55.228 } 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.228 [2024-07-22 18:13:06.993235] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.228 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:55.228 18:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.228 18:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.228 { 00:05:55.228 "subsystems": [ 00:05:55.228 { 00:05:55.228 "subsystem": "vfio_user_target", 00:05:55.228 "config": null 00:05:55.228 }, 00:05:55.228 { 00:05:55.228 "subsystem": "keyring", 00:05:55.228 "config": [] 00:05:55.228 }, 00:05:55.228 { 00:05:55.228 "subsystem": "iobuf", 00:05:55.228 "config": [ 00:05:55.228 { 00:05:55.228 "method": "iobuf_set_options", 00:05:55.228 "params": { 00:05:55.228 "small_pool_count": 8192, 00:05:55.228 "large_pool_count": 1024, 00:05:55.228 "small_bufsize": 8192, 00:05:55.228 "large_bufsize": 135168 00:05:55.228 } 00:05:55.228 } 00:05:55.228 ] 00:05:55.228 }, 00:05:55.228 { 00:05:55.228 "subsystem": "sock", 00:05:55.228 "config": [ 00:05:55.228 { 00:05:55.228 "method": "sock_set_default_impl", 00:05:55.228 "params": { 00:05:55.228 "impl_name": 
"uring" 00:05:55.228 } 00:05:55.228 }, 00:05:55.228 { 00:05:55.228 "method": "sock_impl_set_options", 00:05:55.228 "params": { 00:05:55.228 "impl_name": "ssl", 00:05:55.228 "recv_buf_size": 4096, 00:05:55.228 "send_buf_size": 4096, 00:05:55.228 "enable_recv_pipe": true, 00:05:55.228 "enable_quickack": false, 00:05:55.228 "enable_placement_id": 0, 00:05:55.228 "enable_zerocopy_send_server": true, 00:05:55.228 "enable_zerocopy_send_client": false, 00:05:55.228 "zerocopy_threshold": 0, 00:05:55.228 "tls_version": 0, 00:05:55.228 "enable_ktls": false 00:05:55.228 } 00:05:55.228 }, 00:05:55.228 { 00:05:55.228 "method": "sock_impl_set_options", 00:05:55.228 "params": { 00:05:55.228 "impl_name": "posix", 00:05:55.228 "recv_buf_size": 2097152, 00:05:55.228 "send_buf_size": 2097152, 00:05:55.228 "enable_recv_pipe": true, 00:05:55.228 "enable_quickack": false, 00:05:55.228 "enable_placement_id": 0, 00:05:55.228 "enable_zerocopy_send_server": true, 00:05:55.228 "enable_zerocopy_send_client": false, 00:05:55.228 "zerocopy_threshold": 0, 00:05:55.228 "tls_version": 0, 00:05:55.228 "enable_ktls": false 00:05:55.228 } 00:05:55.228 }, 00:05:55.228 { 00:05:55.228 "method": "sock_impl_set_options", 00:05:55.228 "params": { 00:05:55.228 "impl_name": "uring", 00:05:55.228 "recv_buf_size": 2097152, 00:05:55.228 "send_buf_size": 2097152, 00:05:55.228 "enable_recv_pipe": true, 00:05:55.228 "enable_quickack": false, 00:05:55.228 "enable_placement_id": 0, 00:05:55.228 "enable_zerocopy_send_server": false, 00:05:55.228 "enable_zerocopy_send_client": false, 00:05:55.228 "zerocopy_threshold": 0, 00:05:55.228 "tls_version": 0, 00:05:55.229 "enable_ktls": false 00:05:55.229 } 00:05:55.229 } 00:05:55.229 ] 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "subsystem": "vmd", 00:05:55.229 "config": [] 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "subsystem": "accel", 00:05:55.229 "config": [ 00:05:55.229 { 00:05:55.229 "method": "accel_set_options", 00:05:55.229 "params": { 00:05:55.229 "small_cache_size": 128, 00:05:55.229 "large_cache_size": 16, 00:05:55.229 "task_count": 2048, 00:05:55.229 "sequence_count": 2048, 00:05:55.229 "buf_count": 2048 00:05:55.229 } 00:05:55.229 } 00:05:55.229 ] 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "subsystem": "bdev", 00:05:55.229 "config": [ 00:05:55.229 { 00:05:55.229 "method": "bdev_set_options", 00:05:55.229 "params": { 00:05:55.229 "bdev_io_pool_size": 65535, 00:05:55.229 "bdev_io_cache_size": 256, 00:05:55.229 "bdev_auto_examine": true, 00:05:55.229 "iobuf_small_cache_size": 128, 00:05:55.229 "iobuf_large_cache_size": 16 00:05:55.229 } 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "method": "bdev_raid_set_options", 00:05:55.229 "params": { 00:05:55.229 "process_window_size_kb": 1024, 00:05:55.229 "process_max_bandwidth_mb_sec": 0 00:05:55.229 } 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "method": "bdev_iscsi_set_options", 00:05:55.229 "params": { 00:05:55.229 "timeout_sec": 30 00:05:55.229 } 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "method": "bdev_nvme_set_options", 00:05:55.229 "params": { 00:05:55.229 "action_on_timeout": "none", 00:05:55.229 "timeout_us": 0, 00:05:55.229 "timeout_admin_us": 0, 00:05:55.229 "keep_alive_timeout_ms": 10000, 00:05:55.229 "arbitration_burst": 0, 00:05:55.229 "low_priority_weight": 0, 00:05:55.229 "medium_priority_weight": 0, 00:05:55.229 "high_priority_weight": 0, 00:05:55.229 "nvme_adminq_poll_period_us": 10000, 00:05:55.229 "nvme_ioq_poll_period_us": 0, 00:05:55.229 "io_queue_requests": 0, 00:05:55.229 "delay_cmd_submit": true, 00:05:55.229 
"transport_retry_count": 4, 00:05:55.229 "bdev_retry_count": 3, 00:05:55.229 "transport_ack_timeout": 0, 00:05:55.229 "ctrlr_loss_timeout_sec": 0, 00:05:55.229 "reconnect_delay_sec": 0, 00:05:55.229 "fast_io_fail_timeout_sec": 0, 00:05:55.229 "disable_auto_failback": false, 00:05:55.229 "generate_uuids": false, 00:05:55.229 "transport_tos": 0, 00:05:55.229 "nvme_error_stat": false, 00:05:55.229 "rdma_srq_size": 0, 00:05:55.229 "io_path_stat": false, 00:05:55.229 "allow_accel_sequence": false, 00:05:55.229 "rdma_max_cq_size": 0, 00:05:55.229 "rdma_cm_event_timeout_ms": 0, 00:05:55.229 "dhchap_digests": [ 00:05:55.229 "sha256", 00:05:55.229 "sha384", 00:05:55.229 "sha512" 00:05:55.229 ], 00:05:55.229 "dhchap_dhgroups": [ 00:05:55.229 "null", 00:05:55.229 "ffdhe2048", 00:05:55.229 "ffdhe3072", 00:05:55.229 "ffdhe4096", 00:05:55.229 "ffdhe6144", 00:05:55.229 "ffdhe8192" 00:05:55.229 ] 00:05:55.229 } 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "method": "bdev_nvme_set_hotplug", 00:05:55.229 "params": { 00:05:55.229 "period_us": 100000, 00:05:55.229 "enable": false 00:05:55.229 } 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "method": "bdev_wait_for_examine" 00:05:55.229 } 00:05:55.229 ] 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "subsystem": "scsi", 00:05:55.229 "config": null 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "subsystem": "scheduler", 00:05:55.229 "config": [ 00:05:55.229 { 00:05:55.229 "method": "framework_set_scheduler", 00:05:55.229 "params": { 00:05:55.229 "name": "static" 00:05:55.229 } 00:05:55.229 } 00:05:55.229 ] 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "subsystem": "vhost_scsi", 00:05:55.229 "config": [] 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "subsystem": "vhost_blk", 00:05:55.229 "config": [] 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "subsystem": "ublk", 00:05:55.229 "config": [] 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "subsystem": "nbd", 00:05:55.229 "config": [] 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "subsystem": "nvmf", 00:05:55.229 "config": [ 00:05:55.229 { 00:05:55.229 "method": "nvmf_set_config", 00:05:55.229 "params": { 00:05:55.229 "discovery_filter": "match_any", 00:05:55.229 "admin_cmd_passthru": { 00:05:55.229 "identify_ctrlr": false 00:05:55.229 } 00:05:55.229 } 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "method": "nvmf_set_max_subsystems", 00:05:55.229 "params": { 00:05:55.229 "max_subsystems": 1024 00:05:55.229 } 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "method": "nvmf_set_crdt", 00:05:55.229 "params": { 00:05:55.229 "crdt1": 0, 00:05:55.229 "crdt2": 0, 00:05:55.229 "crdt3": 0 00:05:55.229 } 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "method": "nvmf_create_transport", 00:05:55.229 "params": { 00:05:55.229 "trtype": "TCP", 00:05:55.229 "max_queue_depth": 128, 00:05:55.229 "max_io_qpairs_per_ctrlr": 127, 00:05:55.229 "in_capsule_data_size": 4096, 00:05:55.229 "max_io_size": 131072, 00:05:55.229 "io_unit_size": 131072, 00:05:55.229 "max_aq_depth": 128, 00:05:55.229 "num_shared_buffers": 511, 00:05:55.229 "buf_cache_size": 4294967295, 00:05:55.229 "dif_insert_or_strip": false, 00:05:55.229 "zcopy": false, 00:05:55.229 "c2h_success": true, 00:05:55.229 "sock_priority": 0, 00:05:55.229 "abort_timeout_sec": 1, 00:05:55.229 "ack_timeout": 0, 00:05:55.229 "data_wr_pool_size": 0 00:05:55.229 } 00:05:55.229 } 00:05:55.229 ] 00:05:55.229 }, 00:05:55.229 { 00:05:55.229 "subsystem": "iscsi", 00:05:55.229 "config": [ 00:05:55.229 { 00:05:55.229 "method": "iscsi_set_options", 00:05:55.229 "params": { 00:05:55.229 "node_base": "iqn.2016-06.io.spdk", 
00:05:55.229 "max_sessions": 128, 00:05:55.229 "max_connections_per_session": 2, 00:05:55.229 "max_queue_depth": 64, 00:05:55.229 "default_time2wait": 2, 00:05:55.229 "default_time2retain": 20, 00:05:55.229 "first_burst_length": 8192, 00:05:55.229 "immediate_data": true, 00:05:55.229 "allow_duplicated_isid": false, 00:05:55.229 "error_recovery_level": 0, 00:05:55.229 "nop_timeout": 60, 00:05:55.229 "nop_in_interval": 30, 00:05:55.229 "disable_chap": false, 00:05:55.229 "require_chap": false, 00:05:55.229 "mutual_chap": false, 00:05:55.229 "chap_group": 0, 00:05:55.229 "max_large_datain_per_connection": 64, 00:05:55.229 "max_r2t_per_connection": 4, 00:05:55.229 "pdu_pool_size": 36864, 00:05:55.229 "immediate_data_pool_size": 16384, 00:05:55.229 "data_out_pool_size": 2048 00:05:55.229 } 00:05:55.229 } 00:05:55.229 ] 00:05:55.229 } 00:05:55.229 ] 00:05:55.229 } 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59914 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59914 ']' 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59914 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59914 00:05:55.229 killing process with pid 59914 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59914' 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59914 00:05:55.229 18:13:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59914 00:05:57.768 18:13:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59969 00:05:57.768 18:13:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:57.768 18:13:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:03.031 18:13:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59969 00:06:03.031 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59969 ']' 00:06:03.031 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59969 00:06:03.031 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:03.031 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.031 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59969 00:06:03.031 killing process with pid 59969 00:06:03.031 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.031 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.031 18:13:14 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 59969' 00:06:03.031 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59969 00:06:03.031 18:13:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59969 00:06:04.931 18:13:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:04.931 18:13:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:04.931 ************************************ 00:06:04.931 END TEST skip_rpc_with_json 00:06:04.931 ************************************ 00:06:04.931 00:06:04.931 real 0m11.169s 00:06:04.931 user 0m10.590s 00:06:04.931 sys 0m0.970s 00:06:04.931 18:13:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.931 18:13:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.931 18:13:16 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:04.932 18:13:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:04.932 18:13:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.932 18:13:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.932 18:13:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.932 ************************************ 00:06:04.932 START TEST skip_rpc_with_delay 00:06:04.932 ************************************ 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:04.932 [2024-07-22 18:13:16.880185] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
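The '*ERROR*: Cannot use --wait-for-rpc ...' message above is the expected outcome of the skip_rpc_with_delay case: spdk_tgt must refuse --wait-for-rpc when --no-rpc-server is also passed. A minimal sketch of that negative check, assuming only the binary path and flags already shown in this run (the suite itself wraps the call in its NOT helper):

    # Sketch: this flag combination must fail; a zero exit here would be a test failure.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi
    echo "OK: invalid flag combination rejected as expected"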
00:06:04.932 [2024-07-22 18:13:16.880358] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.932 ************************************ 00:06:04.932 END TEST skip_rpc_with_delay 00:06:04.932 ************************************ 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.932 00:06:04.932 real 0m0.192s 00:06:04.932 user 0m0.098s 00:06:04.932 sys 0m0.092s 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.932 18:13:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:05.190 18:13:16 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:05.190 18:13:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:05.190 18:13:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:05.190 18:13:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:05.190 18:13:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.190 18:13:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.190 18:13:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.190 ************************************ 00:06:05.190 START TEST exit_on_failed_rpc_init 00:06:05.190 ************************************ 00:06:05.190 18:13:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:05.190 18:13:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60103 00:06:05.190 18:13:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 60103 00:06:05.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.190 18:13:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 60103 ']' 00:06:05.190 18:13:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.190 18:13:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.190 18:13:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.190 18:13:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.190 18:13:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.190 18:13:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.190 [2024-07-22 18:13:17.127201] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:05.190 [2024-07-22 18:13:17.127474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60103 ] 00:06:05.448 [2024-07-22 18:13:17.306650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.706 [2024-07-22 18:13:17.548480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.965 [2024-07-22 18:13:17.756341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:06.531 18:13:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:06.531 [2024-07-22 18:13:18.483341] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:06.531 [2024-07-22 18:13:18.483549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60126 ] 00:06:06.789 [2024-07-22 18:13:18.660270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.047 [2024-07-22 18:13:18.946840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.047 [2024-07-22 18:13:18.946963] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:07.047 [2024-07-22 18:13:18.946990] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:07.047 [2024-07-22 18:13:18.947016] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 60103 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 60103 ']' 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 60103 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60103 00:06:07.614 killing process with pid 60103 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60103' 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 60103 00:06:07.614 18:13:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 60103 00:06:10.146 00:06:10.146 real 0m4.695s 00:06:10.146 user 0m5.428s 00:06:10.146 sys 0m0.676s 00:06:10.146 ************************************ 00:06:10.146 END TEST exit_on_failed_rpc_init 00:06:10.146 ************************************ 00:06:10.146 18:13:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.146 18:13:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.146 18:13:21 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:10.146 18:13:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:10.146 ************************************ 00:06:10.146 END TEST skip_rpc 00:06:10.146 ************************************ 00:06:10.146 00:06:10.146 real 0m23.614s 00:06:10.146 user 0m22.920s 00:06:10.146 sys 0m2.373s 00:06:10.146 18:13:21 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.146 18:13:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.146 18:13:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.146 18:13:21 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.146 18:13:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.146 
18:13:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.146 18:13:21 -- common/autotest_common.sh@10 -- # set +x 00:06:10.146 ************************************ 00:06:10.146 START TEST rpc_client 00:06:10.146 ************************************ 00:06:10.146 18:13:21 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.146 * Looking for test storage... 00:06:10.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:10.146 18:13:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:10.147 OK 00:06:10.147 18:13:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:10.147 00:06:10.147 real 0m0.149s 00:06:10.147 user 0m0.055s 00:06:10.147 sys 0m0.097s 00:06:10.147 18:13:21 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.147 18:13:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:10.147 ************************************ 00:06:10.147 END TEST rpc_client 00:06:10.147 ************************************ 00:06:10.147 18:13:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.147 18:13:21 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:10.147 18:13:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.147 18:13:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.147 18:13:21 -- common/autotest_common.sh@10 -- # set +x 00:06:10.147 ************************************ 00:06:10.147 START TEST json_config 00:06:10.147 ************************************ 00:06:10.147 18:13:21 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.147 18:13:22 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.147 18:13:22 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.147 18:13:22 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.147 18:13:22 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.147 18:13:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.147 18:13:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.147 18:13:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.147 18:13:22 json_config -- paths/export.sh@5 -- # export PATH 00:06:10.147 18:13:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@47 -- # : 0 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:10.147 18:13:22 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:10.147 INFO: JSON configuration test init 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:10.147 18:13:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.147 18:13:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:10.147 18:13:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.147 18:13:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.147 Waiting for target to run... 00:06:10.147 18:13:22 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:10.147 18:13:22 json_config -- json_config/common.sh@9 -- # local app=target 00:06:10.147 18:13:22 json_config -- json_config/common.sh@10 -- # shift 00:06:10.147 18:13:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.147 18:13:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.147 18:13:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.147 18:13:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.147 18:13:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.147 18:13:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60275 00:06:10.147 18:13:22 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:10.147 18:13:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
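The target here is launched idle: --wait-for-rpc defers subsystem initialization and -r points the RPC server at a private socket, so nothing happens until the test drives it through /var/tmp/spdk_tgt.sock. A hedged sketch of that start-then-configure pattern, reusing only flags and RPCs visible in this log; the fixed sleep and the config path fed to load_config are illustrative stand-ins for the waitforlisten polling and generated config the suite actually uses:

    # Sketch: start spdk_tgt idle, then replay a saved JSON configuration over its RPC socket.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    TGT_PID=$!
    sleep 1    # illustrative; the suite polls the socket via waitforlisten instead

    # load_config replays the "method"/"params" entries exactly as save_config wrote them.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock load_config \
        < "$SPDK/spdk_tgt_config.json"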
00:06:10.147 18:13:22 json_config -- json_config/common.sh@25 -- # waitforlisten 60275 /var/tmp/spdk_tgt.sock 00:06:10.147 18:13:22 json_config -- common/autotest_common.sh@829 -- # '[' -z 60275 ']' 00:06:10.147 18:13:22 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.147 18:13:22 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.147 18:13:22 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.147 18:13:22 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.147 18:13:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.406 [2024-07-22 18:13:22.225157] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:10.406 [2024-07-22 18:13:22.225577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60275 ] 00:06:10.972 [2024-07-22 18:13:22.682907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.972 [2024-07-22 18:13:22.935571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.231 18:13:23 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.231 18:13:23 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:11.231 00:06:11.231 18:13:23 json_config -- json_config/common.sh@26 -- # echo '' 00:06:11.231 18:13:23 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:11.231 18:13:23 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:11.231 18:13:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.231 18:13:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.231 18:13:23 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:11.231 18:13:23 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:11.231 18:13:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.231 18:13:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.231 18:13:23 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:11.231 18:13:23 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:11.231 18:13:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:11.804 [2024-07-22 18:13:23.665887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.369 18:13:24 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:06:12.369 18:13:24 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:12.369 18:13:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.369 18:13:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.369 18:13:24 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:12.369 18:13:24 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 
'bdev_unregister') 00:06:12.369 18:13:24 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:12.369 18:13:24 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:12.369 18:13:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:12.369 18:13:24 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@51 -- # sort 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:12.625 18:13:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.625 18:13:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:12.625 18:13:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.625 18:13:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:12.625 18:13:24 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:12.625 18:13:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:12.882 MallocForNvmf0 00:06:12.882 18:13:24 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:12.882 18:13:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:13.139 MallocForNvmf1 00:06:13.139 18:13:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:13.139 18:13:25 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:13.397 [2024-07-22 18:13:25.383864] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.397 18:13:25 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:13.397 18:13:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:13.654 18:13:25 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:13.654 18:13:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:13.911 18:13:25 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:13.911 18:13:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:14.169 18:13:26 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:14.169 18:13:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:14.428 [2024-07-22 18:13:26.344697] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:14.428 18:13:26 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:14.428 18:13:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.428 18:13:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.428 18:13:26 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:14.428 18:13:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.428 18:13:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.686 18:13:26 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:14.686 18:13:26 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:14.686 18:13:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:14.686 MallocBdevForConfigChangeCheck 00:06:14.686 18:13:26 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:14.686 18:13:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.686 18:13:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.944 18:13:26 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:14.944 18:13:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.202 INFO: shutting down applications... 00:06:15.202 18:13:27 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
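Everything json_config round-trips later is created right here: two malloc bdevs, a TCP transport, one NVMe-oF subsystem with both namespaces and a listener, plus the sentinel bdev used for change detection. Collected into one hedged sketch; the suite issues these through its tgt_rpc wrapper rather than a flat script, and the save_config redirect target is illustrative:

    # Sketch of the bring-up recorded above: two malloc bdevs exported via one NVMe-oF/TCP subsystem.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck

    # Persist the state so a relaunch with --json can reproduce it without further RPCs.
    $RPC save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json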
00:06:15.202 18:13:27 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:15.202 18:13:27 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:15.202 18:13:27 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:15.202 18:13:27 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:15.459 Calling clear_iscsi_subsystem 00:06:15.459 Calling clear_nvmf_subsystem 00:06:15.459 Calling clear_nbd_subsystem 00:06:15.459 Calling clear_ublk_subsystem 00:06:15.459 Calling clear_vhost_blk_subsystem 00:06:15.459 Calling clear_vhost_scsi_subsystem 00:06:15.459 Calling clear_bdev_subsystem 00:06:15.459 18:13:27 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:15.459 18:13:27 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:15.459 18:13:27 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:15.459 18:13:27 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.459 18:13:27 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:15.459 18:13:27 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:16.023 18:13:27 json_config -- json_config/json_config.sh@349 -- # break 00:06:16.023 18:13:27 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:16.023 18:13:27 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:16.023 18:13:27 json_config -- json_config/common.sh@31 -- # local app=target 00:06:16.023 18:13:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:16.023 18:13:27 json_config -- json_config/common.sh@35 -- # [[ -n 60275 ]] 00:06:16.023 18:13:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60275 00:06:16.023 18:13:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:16.023 18:13:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.023 18:13:27 json_config -- json_config/common.sh@41 -- # kill -0 60275 00:06:16.023 18:13:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.587 18:13:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.587 18:13:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.587 18:13:28 json_config -- json_config/common.sh@41 -- # kill -0 60275 00:06:16.587 18:13:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:17.206 18:13:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:17.206 18:13:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.206 18:13:28 json_config -- json_config/common.sh@41 -- # kill -0 60275 00:06:17.206 18:13:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:17.463 18:13:29 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:17.463 SPDK target shutdown done 00:06:17.463 INFO: relaunching applications... 
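The loop being traced here is json_config_test_shutdown_app: it sends SIGINT to the target (pid 60275 in this run) and then polls with kill -0 instead of escalating immediately. A sketch of that graceful-shutdown loop with the same 30 x 0.5 s budget; TGT_PID is an illustrative variable for the target's pid:

    # Sketch: request a clean exit, then wait up to ~15 s for the process to disappear.
    kill -SIGINT "$TGT_PID"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$TGT_PID" 2>/dev/null || break    # process gone -> shutdown completed
        sleep 0.5
    done
    if kill -0 "$TGT_PID" 2>/dev/null; then
        echo "target still alive after SIGINT" >&2
    fi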
00:06:17.463 18:13:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.463 18:13:29 json_config -- json_config/common.sh@41 -- # kill -0 60275 00:06:17.463 18:13:29 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:17.463 18:13:29 json_config -- json_config/common.sh@43 -- # break 00:06:17.463 18:13:29 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:17.463 18:13:29 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:17.463 18:13:29 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:17.463 18:13:29 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.463 18:13:29 json_config -- json_config/common.sh@9 -- # local app=target 00:06:17.463 Waiting for target to run... 00:06:17.463 18:13:29 json_config -- json_config/common.sh@10 -- # shift 00:06:17.463 18:13:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:17.463 18:13:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:17.463 18:13:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:17.463 18:13:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.463 18:13:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.463 18:13:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60490 00:06:17.463 18:13:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:17.463 18:13:29 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.463 18:13:29 json_config -- json_config/common.sh@25 -- # waitforlisten 60490 /var/tmp/spdk_tgt.sock 00:06:17.463 18:13:29 json_config -- common/autotest_common.sh@829 -- # '[' -z 60490 ']' 00:06:17.463 18:13:29 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:17.463 18:13:29 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.463 18:13:29 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:17.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:17.463 18:13:29 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.463 18:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.719 [2024-07-22 18:13:29.559901] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:17.720 [2024-07-22 18:13:29.560419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60490 ] 00:06:18.283 [2024-07-22 18:13:30.004832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.540 [2024-07-22 18:13:30.301644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.797 [2024-07-22 18:13:30.609638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.361 [2024-07-22 18:13:31.339873] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.361 [2024-07-22 18:13:31.372067] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:19.619 00:06:19.619 INFO: Checking if target configuration is the same... 00:06:19.619 18:13:31 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.619 18:13:31 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:19.619 18:13:31 json_config -- json_config/common.sh@26 -- # echo '' 00:06:19.619 18:13:31 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:19.619 18:13:31 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:19.619 18:13:31 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:19.619 18:13:31 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:19.619 18:13:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.619 + '[' 2 -ne 2 ']' 00:06:19.619 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:19.619 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:19.619 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:19.619 +++ basename /dev/fd/62 00:06:19.619 ++ mktemp /tmp/62.XXX 00:06:19.619 + tmp_file_1=/tmp/62.8PL 00:06:19.619 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:19.619 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:19.619 + tmp_file_2=/tmp/spdk_tgt_config.json.iQT 00:06:19.619 + ret=0 00:06:19.619 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:19.876 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:19.876 + diff -u /tmp/62.8PL /tmp/spdk_tgt_config.json.iQT 00:06:19.876 INFO: JSON config files are the same 00:06:19.876 + echo 'INFO: JSON config files are the same' 00:06:19.876 + rm /tmp/62.8PL /tmp/spdk_tgt_config.json.iQT 00:06:19.876 + exit 0 00:06:19.876 INFO: changing configuration and checking if this can be detected... 00:06:19.876 18:13:31 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:19.876 18:13:31 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
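The '+' lines above are json_diff.sh confirming that what the relaunched target reports via save_config matches the spdk_tgt_config.json it was booted from: both sides are normalised by config_filter.py -method sort and then compared with a plain diff. The same comparison, sketched with illustrative temp-file names in place of the mktemp results in the log:

    # Sketch: normalise live and on-disk configurations, then let diff decide.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live_config.json
    "$SPDK/test/json_config/config_filter.py" -method sort \
        < "$SPDK/spdk_tgt_config.json" > /tmp/file_config.json

    if diff -u /tmp/live_config.json /tmp/file_config.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'configuration drift detected' >&2
    fi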
00:06:19.876 18:13:31 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:19.876 18:13:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:20.442 18:13:32 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:20.442 18:13:32 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:20.442 18:13:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:20.442 + '[' 2 -ne 2 ']' 00:06:20.442 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:20.442 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:20.442 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:20.442 +++ basename /dev/fd/62 00:06:20.442 ++ mktemp /tmp/62.XXX 00:06:20.442 + tmp_file_1=/tmp/62.lle 00:06:20.442 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:20.442 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:20.442 + tmp_file_2=/tmp/spdk_tgt_config.json.I4R 00:06:20.442 + ret=0 00:06:20.442 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:20.701 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:20.701 + diff -u /tmp/62.lle /tmp/spdk_tgt_config.json.I4R 00:06:20.701 + ret=1 00:06:20.701 + echo '=== Start of file: /tmp/62.lle ===' 00:06:20.701 + cat /tmp/62.lle 00:06:20.701 + echo '=== End of file: /tmp/62.lle ===' 00:06:20.701 + echo '' 00:06:20.701 + echo '=== Start of file: /tmp/spdk_tgt_config.json.I4R ===' 00:06:20.701 + cat /tmp/spdk_tgt_config.json.I4R 00:06:20.701 + echo '=== End of file: /tmp/spdk_tgt_config.json.I4R ===' 00:06:20.701 + echo '' 00:06:20.701 + rm /tmp/62.lle /tmp/spdk_tgt_config.json.I4R 00:06:20.701 + exit 1 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:20.701 INFO: configuration change detected. 
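Change detection is the inverse of the previous check: delete the sentinel bdev, rerun the same sorted diff, and require it to fail. A short sketch of that step, assuming the comparison from the sketch above is rerun afterwards:

    # Sketch: remove the sentinel so the next save_config no longer matches the file on disk.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck

    # Re-running the sorted diff from the previous step must now exit non-zero,
    # which json_config reports as "INFO: configuration change detected."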
00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:20.701 18:13:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.701 18:13:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@321 -- # [[ -n 60490 ]] 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:20.701 18:13:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.701 18:13:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:20.701 18:13:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.701 18:13:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.701 18:13:32 json_config -- json_config/json_config.sh@327 -- # killprocess 60490 00:06:20.701 18:13:32 json_config -- common/autotest_common.sh@948 -- # '[' -z 60490 ']' 00:06:20.701 18:13:32 json_config -- common/autotest_common.sh@952 -- # kill -0 60490 00:06:20.701 18:13:32 json_config -- common/autotest_common.sh@953 -- # uname 00:06:20.959 18:13:32 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.959 18:13:32 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60490 00:06:20.959 killing process with pid 60490 00:06:20.959 18:13:32 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.959 18:13:32 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.960 18:13:32 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60490' 00:06:20.960 18:13:32 json_config -- common/autotest_common.sh@967 -- # kill 60490 00:06:20.960 18:13:32 json_config -- common/autotest_common.sh@972 -- # wait 60490 00:06:21.894 18:13:33 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:21.894 18:13:33 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:21.894 18:13:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.894 18:13:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.895 INFO: Success 00:06:21.895 18:13:33 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:21.895 18:13:33 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:21.895 ************************************ 00:06:21.895 END TEST json_config 00:06:21.895 
************************************ 00:06:21.895 00:06:21.895 real 0m11.793s 00:06:21.895 user 0m14.921s 00:06:21.895 sys 0m2.098s 00:06:21.895 18:13:33 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.895 18:13:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.895 18:13:33 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.895 18:13:33 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:21.895 18:13:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.895 18:13:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.895 18:13:33 -- common/autotest_common.sh@10 -- # set +x 00:06:21.895 ************************************ 00:06:21.895 START TEST json_config_extra_key 00:06:21.895 ************************************ 00:06:21.895 18:13:33 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:21.895 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.895 18:13:33 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:21.895 18:13:33 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.895 18:13:33 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.895 18:13:33 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.895 18:13:33 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.895 18:13:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.895 18:13:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.154 18:13:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:22.154 18:13:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.154 18:13:33 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:22.154 18:13:33 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:22.154 18:13:33 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:22.154 18:13:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.154 18:13:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.154 18:13:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.154 18:13:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:22.154 18:13:33 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:22.154 18:13:33 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # 
declare -A app_params 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:22.154 INFO: launching applications... 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:22.154 18:13:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:22.154 18:13:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:22.154 18:13:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:22.154 18:13:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.154 18:13:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.154 18:13:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.154 18:13:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.155 18:13:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.155 Waiting for target to run... 00:06:22.155 18:13:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=60653 00:06:22.155 18:13:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.155 18:13:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:22.155 18:13:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 60653 /var/tmp/spdk_tgt.sock 00:06:22.155 18:13:33 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 60653 ']' 00:06:22.155 18:13:33 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:22.155 18:13:33 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.155 18:13:33 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.155 18:13:33 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.155 18:13:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:22.155 [2024-07-22 18:13:34.044103] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
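The extra_key variant boots spdk_tgt directly from a pre-built JSON configuration (--json extra_key.json) and then blocks until the RPC socket answers before running the test body. A minimal sketch of that launch-and-wait pattern, using the binary, core mask, memory size and socket path from this run; the polling loop below is an illustrative stand-in for the test's waitforlisten helper, whose internals are not shown in this log:

# start the target from the pre-built JSON config
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
pid=$!
# poll until the target answers on its RPC socket, bailing out if it dies first
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo 'spdk_tgt exited before listening'; exit 1; }
    sleep 0.5
done
echo "Target is up and listening (pid $pid)"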
00:06:22.155 [2024-07-22 18:13:34.044305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60653 ] 00:06:22.722 [2024-07-22 18:13:34.534688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.980 [2024-07-22 18:13:34.758519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.980 [2024-07-22 18:13:34.932635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.545 00:06:23.545 INFO: shutting down applications... 00:06:23.545 18:13:35 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.545 18:13:35 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:23.545 18:13:35 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:23.545 18:13:35 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:23.545 18:13:35 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:23.545 18:13:35 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:23.545 18:13:35 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:23.545 18:13:35 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 60653 ]] 00:06:23.545 18:13:35 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 60653 00:06:23.545 18:13:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:23.545 18:13:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.545 18:13:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60653 00:06:23.545 18:13:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:24.111 18:13:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:24.111 18:13:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.111 18:13:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60653 00:06:24.111 18:13:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:24.678 18:13:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:24.678 18:13:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.678 18:13:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60653 00:06:24.678 18:13:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:24.936 18:13:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:24.936 18:13:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.936 18:13:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60653 00:06:24.936 18:13:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:25.502 18:13:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:25.502 18:13:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.502 18:13:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60653 00:06:25.502 18:13:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.069 18:13:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.069 18:13:37 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.069 18:13:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60653 00:06:26.069 18:13:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.701 18:13:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.702 18:13:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.702 18:13:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 60653 00:06:26.702 18:13:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:26.702 18:13:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:26.702 SPDK target shutdown done 00:06:26.702 Success 00:06:26.702 18:13:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:26.702 18:13:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:26.702 18:13:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:26.702 00:06:26.702 real 0m4.595s 00:06:26.702 user 0m3.962s 00:06:26.702 sys 0m0.647s 00:06:26.702 ************************************ 00:06:26.702 END TEST json_config_extra_key 00:06:26.702 ************************************ 00:06:26.702 18:13:38 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.702 18:13:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:26.702 18:13:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:26.702 18:13:38 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.702 18:13:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.702 18:13:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.702 18:13:38 -- common/autotest_common.sh@10 -- # set +x 00:06:26.702 ************************************ 00:06:26.702 START TEST alias_rpc 00:06:26.702 ************************************ 00:06:26.702 18:13:38 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.702 * Looking for test storage... 00:06:26.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:26.702 18:13:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:26.702 18:13:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=60757 00:06:26.702 18:13:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 60757 00:06:26.702 18:13:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.702 18:13:38 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 60757 ']' 00:06:26.702 18:13:38 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.702 18:13:38 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.702 18:13:38 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.702 18:13:38 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.702 18:13:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.702 [2024-07-22 18:13:38.691539] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
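The json_config_extra_key teardown above sends SIGINT to the target and then probes it with kill -0 up to 30 times at half-second intervals before declaring the shutdown done; the same pattern recurs in the killprocess calls of the later tests. A minimal sketch of that shutdown poll, with the PID and limits taken from the run above:

pid=60653                      # target PID from the run above
kill -SIGINT "$pid"            # ask the target to shut down cleanly
for (( i = 0; i < 30; i++ )); do
    # kill -0 only probes whether the process still exists
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done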
00:06:26.702 [2024-07-22 18:13:38.691725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60757 ] 00:06:26.960 [2024-07-22 18:13:38.863824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.219 [2024-07-22 18:13:39.105522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.478 [2024-07-22 18:13:39.306868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.044 18:13:39 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.044 18:13:39 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:28.044 18:13:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:28.302 18:13:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 60757 00:06:28.302 18:13:40 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 60757 ']' 00:06:28.302 18:13:40 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 60757 00:06:28.302 18:13:40 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:28.302 18:13:40 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.302 18:13:40 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60757 00:06:28.302 killing process with pid 60757 00:06:28.302 18:13:40 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.302 18:13:40 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.302 18:13:40 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60757' 00:06:28.302 18:13:40 alias_rpc -- common/autotest_common.sh@967 -- # kill 60757 00:06:28.302 18:13:40 alias_rpc -- common/autotest_common.sh@972 -- # wait 60757 00:06:30.872 ************************************ 00:06:30.872 END TEST alias_rpc 00:06:30.872 ************************************ 00:06:30.872 00:06:30.872 real 0m4.047s 00:06:30.872 user 0m4.167s 00:06:30.872 sys 0m0.583s 00:06:30.872 18:13:42 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.872 18:13:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.872 18:13:42 -- common/autotest_common.sh@1142 -- # return 0 00:06:30.872 18:13:42 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:30.872 18:13:42 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:30.872 18:13:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.872 18:13:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.872 18:13:42 -- common/autotest_common.sh@10 -- # set +x 00:06:30.872 ************************************ 00:06:30.872 START TEST spdkcli_tcp 00:06:30.872 ************************************ 00:06:30.872 18:13:42 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:30.872 * Looking for test storage... 
00:06:30.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:30.872 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:30.872 18:13:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:30.872 18:13:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:30.872 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:30.872 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:30.872 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:30.872 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:30.872 18:13:42 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.872 18:13:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.872 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60856 00:06:30.872 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:30.872 18:13:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60856 00:06:30.872 18:13:42 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 60856 ']' 00:06:30.872 18:13:42 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.872 18:13:42 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.872 18:13:42 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.872 18:13:42 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.872 18:13:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.872 [2024-07-22 18:13:42.795613] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:30.872 [2024-07-22 18:13:42.795798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60856 ] 00:06:31.131 [2024-07-22 18:13:42.971579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.390 [2024-07-22 18:13:43.224845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.390 [2024-07-22 18:13:43.224853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.648 [2024-07-22 18:13:43.436969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.215 18:13:44 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.215 18:13:44 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:32.215 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60873 00:06:32.215 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:32.215 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:32.474 [ 00:06:32.474 "bdev_malloc_delete", 00:06:32.474 "bdev_malloc_create", 00:06:32.474 "bdev_null_resize", 00:06:32.474 "bdev_null_delete", 00:06:32.474 "bdev_null_create", 00:06:32.474 "bdev_nvme_cuse_unregister", 00:06:32.474 "bdev_nvme_cuse_register", 00:06:32.474 "bdev_opal_new_user", 00:06:32.474 "bdev_opal_set_lock_state", 00:06:32.474 "bdev_opal_delete", 00:06:32.474 "bdev_opal_get_info", 00:06:32.474 "bdev_opal_create", 00:06:32.474 "bdev_nvme_opal_revert", 00:06:32.474 "bdev_nvme_opal_init", 00:06:32.474 "bdev_nvme_send_cmd", 00:06:32.474 "bdev_nvme_get_path_iostat", 00:06:32.474 "bdev_nvme_get_mdns_discovery_info", 00:06:32.474 "bdev_nvme_stop_mdns_discovery", 00:06:32.474 "bdev_nvme_start_mdns_discovery", 00:06:32.474 "bdev_nvme_set_multipath_policy", 00:06:32.474 "bdev_nvme_set_preferred_path", 00:06:32.474 "bdev_nvme_get_io_paths", 00:06:32.474 "bdev_nvme_remove_error_injection", 00:06:32.474 "bdev_nvme_add_error_injection", 00:06:32.474 "bdev_nvme_get_discovery_info", 00:06:32.474 "bdev_nvme_stop_discovery", 00:06:32.474 "bdev_nvme_start_discovery", 00:06:32.474 "bdev_nvme_get_controller_health_info", 00:06:32.474 "bdev_nvme_disable_controller", 00:06:32.474 "bdev_nvme_enable_controller", 00:06:32.474 "bdev_nvme_reset_controller", 00:06:32.474 "bdev_nvme_get_transport_statistics", 00:06:32.474 "bdev_nvme_apply_firmware", 00:06:32.474 "bdev_nvme_detach_controller", 00:06:32.474 "bdev_nvme_get_controllers", 00:06:32.474 "bdev_nvme_attach_controller", 00:06:32.474 "bdev_nvme_set_hotplug", 00:06:32.474 "bdev_nvme_set_options", 00:06:32.474 "bdev_passthru_delete", 00:06:32.474 "bdev_passthru_create", 00:06:32.474 "bdev_lvol_set_parent_bdev", 00:06:32.474 "bdev_lvol_set_parent", 00:06:32.474 "bdev_lvol_check_shallow_copy", 00:06:32.474 "bdev_lvol_start_shallow_copy", 00:06:32.474 "bdev_lvol_grow_lvstore", 00:06:32.474 "bdev_lvol_get_lvols", 00:06:32.474 "bdev_lvol_get_lvstores", 00:06:32.474 "bdev_lvol_delete", 00:06:32.474 "bdev_lvol_set_read_only", 00:06:32.474 "bdev_lvol_resize", 00:06:32.474 "bdev_lvol_decouple_parent", 00:06:32.474 "bdev_lvol_inflate", 00:06:32.474 "bdev_lvol_rename", 00:06:32.474 "bdev_lvol_clone_bdev", 00:06:32.474 "bdev_lvol_clone", 00:06:32.474 "bdev_lvol_snapshot", 00:06:32.474 "bdev_lvol_create", 
00:06:32.474 "bdev_lvol_delete_lvstore", 00:06:32.474 "bdev_lvol_rename_lvstore", 00:06:32.474 "bdev_lvol_create_lvstore", 00:06:32.474 "bdev_raid_set_options", 00:06:32.474 "bdev_raid_remove_base_bdev", 00:06:32.474 "bdev_raid_add_base_bdev", 00:06:32.474 "bdev_raid_delete", 00:06:32.474 "bdev_raid_create", 00:06:32.474 "bdev_raid_get_bdevs", 00:06:32.474 "bdev_error_inject_error", 00:06:32.474 "bdev_error_delete", 00:06:32.474 "bdev_error_create", 00:06:32.474 "bdev_split_delete", 00:06:32.474 "bdev_split_create", 00:06:32.474 "bdev_delay_delete", 00:06:32.474 "bdev_delay_create", 00:06:32.474 "bdev_delay_update_latency", 00:06:32.474 "bdev_zone_block_delete", 00:06:32.474 "bdev_zone_block_create", 00:06:32.474 "blobfs_create", 00:06:32.474 "blobfs_detect", 00:06:32.474 "blobfs_set_cache_size", 00:06:32.474 "bdev_aio_delete", 00:06:32.474 "bdev_aio_rescan", 00:06:32.474 "bdev_aio_create", 00:06:32.474 "bdev_ftl_set_property", 00:06:32.474 "bdev_ftl_get_properties", 00:06:32.474 "bdev_ftl_get_stats", 00:06:32.474 "bdev_ftl_unmap", 00:06:32.474 "bdev_ftl_unload", 00:06:32.474 "bdev_ftl_delete", 00:06:32.474 "bdev_ftl_load", 00:06:32.474 "bdev_ftl_create", 00:06:32.474 "bdev_virtio_attach_controller", 00:06:32.474 "bdev_virtio_scsi_get_devices", 00:06:32.474 "bdev_virtio_detach_controller", 00:06:32.474 "bdev_virtio_blk_set_hotplug", 00:06:32.474 "bdev_iscsi_delete", 00:06:32.474 "bdev_iscsi_create", 00:06:32.474 "bdev_iscsi_set_options", 00:06:32.474 "bdev_uring_delete", 00:06:32.474 "bdev_uring_rescan", 00:06:32.474 "bdev_uring_create", 00:06:32.474 "accel_error_inject_error", 00:06:32.474 "ioat_scan_accel_module", 00:06:32.474 "dsa_scan_accel_module", 00:06:32.474 "iaa_scan_accel_module", 00:06:32.474 "vfu_virtio_create_scsi_endpoint", 00:06:32.474 "vfu_virtio_scsi_remove_target", 00:06:32.474 "vfu_virtio_scsi_add_target", 00:06:32.474 "vfu_virtio_create_blk_endpoint", 00:06:32.474 "vfu_virtio_delete_endpoint", 00:06:32.474 "keyring_file_remove_key", 00:06:32.474 "keyring_file_add_key", 00:06:32.474 "keyring_linux_set_options", 00:06:32.474 "iscsi_get_histogram", 00:06:32.474 "iscsi_enable_histogram", 00:06:32.474 "iscsi_set_options", 00:06:32.474 "iscsi_get_auth_groups", 00:06:32.474 "iscsi_auth_group_remove_secret", 00:06:32.474 "iscsi_auth_group_add_secret", 00:06:32.474 "iscsi_delete_auth_group", 00:06:32.474 "iscsi_create_auth_group", 00:06:32.474 "iscsi_set_discovery_auth", 00:06:32.474 "iscsi_get_options", 00:06:32.475 "iscsi_target_node_request_logout", 00:06:32.475 "iscsi_target_node_set_redirect", 00:06:32.475 "iscsi_target_node_set_auth", 00:06:32.475 "iscsi_target_node_add_lun", 00:06:32.475 "iscsi_get_stats", 00:06:32.475 "iscsi_get_connections", 00:06:32.475 "iscsi_portal_group_set_auth", 00:06:32.475 "iscsi_start_portal_group", 00:06:32.475 "iscsi_delete_portal_group", 00:06:32.475 "iscsi_create_portal_group", 00:06:32.475 "iscsi_get_portal_groups", 00:06:32.475 "iscsi_delete_target_node", 00:06:32.475 "iscsi_target_node_remove_pg_ig_maps", 00:06:32.475 "iscsi_target_node_add_pg_ig_maps", 00:06:32.475 "iscsi_create_target_node", 00:06:32.475 "iscsi_get_target_nodes", 00:06:32.475 "iscsi_delete_initiator_group", 00:06:32.475 "iscsi_initiator_group_remove_initiators", 00:06:32.475 "iscsi_initiator_group_add_initiators", 00:06:32.475 "iscsi_create_initiator_group", 00:06:32.475 "iscsi_get_initiator_groups", 00:06:32.475 "nvmf_set_crdt", 00:06:32.475 "nvmf_set_config", 00:06:32.475 "nvmf_set_max_subsystems", 00:06:32.475 "nvmf_stop_mdns_prr", 00:06:32.475 
"nvmf_publish_mdns_prr", 00:06:32.475 "nvmf_subsystem_get_listeners", 00:06:32.475 "nvmf_subsystem_get_qpairs", 00:06:32.475 "nvmf_subsystem_get_controllers", 00:06:32.475 "nvmf_get_stats", 00:06:32.475 "nvmf_get_transports", 00:06:32.475 "nvmf_create_transport", 00:06:32.475 "nvmf_get_targets", 00:06:32.475 "nvmf_delete_target", 00:06:32.475 "nvmf_create_target", 00:06:32.475 "nvmf_subsystem_allow_any_host", 00:06:32.475 "nvmf_subsystem_remove_host", 00:06:32.475 "nvmf_subsystem_add_host", 00:06:32.475 "nvmf_ns_remove_host", 00:06:32.475 "nvmf_ns_add_host", 00:06:32.475 "nvmf_subsystem_remove_ns", 00:06:32.475 "nvmf_subsystem_add_ns", 00:06:32.475 "nvmf_subsystem_listener_set_ana_state", 00:06:32.475 "nvmf_discovery_get_referrals", 00:06:32.475 "nvmf_discovery_remove_referral", 00:06:32.475 "nvmf_discovery_add_referral", 00:06:32.475 "nvmf_subsystem_remove_listener", 00:06:32.475 "nvmf_subsystem_add_listener", 00:06:32.475 "nvmf_delete_subsystem", 00:06:32.475 "nvmf_create_subsystem", 00:06:32.475 "nvmf_get_subsystems", 00:06:32.475 "env_dpdk_get_mem_stats", 00:06:32.475 "nbd_get_disks", 00:06:32.475 "nbd_stop_disk", 00:06:32.475 "nbd_start_disk", 00:06:32.475 "ublk_recover_disk", 00:06:32.475 "ublk_get_disks", 00:06:32.475 "ublk_stop_disk", 00:06:32.475 "ublk_start_disk", 00:06:32.475 "ublk_destroy_target", 00:06:32.475 "ublk_create_target", 00:06:32.475 "virtio_blk_create_transport", 00:06:32.475 "virtio_blk_get_transports", 00:06:32.475 "vhost_controller_set_coalescing", 00:06:32.475 "vhost_get_controllers", 00:06:32.475 "vhost_delete_controller", 00:06:32.475 "vhost_create_blk_controller", 00:06:32.475 "vhost_scsi_controller_remove_target", 00:06:32.475 "vhost_scsi_controller_add_target", 00:06:32.475 "vhost_start_scsi_controller", 00:06:32.475 "vhost_create_scsi_controller", 00:06:32.475 "thread_set_cpumask", 00:06:32.475 "framework_get_governor", 00:06:32.475 "framework_get_scheduler", 00:06:32.475 "framework_set_scheduler", 00:06:32.475 "framework_get_reactors", 00:06:32.475 "thread_get_io_channels", 00:06:32.475 "thread_get_pollers", 00:06:32.475 "thread_get_stats", 00:06:32.475 "framework_monitor_context_switch", 00:06:32.475 "spdk_kill_instance", 00:06:32.475 "log_enable_timestamps", 00:06:32.475 "log_get_flags", 00:06:32.475 "log_clear_flag", 00:06:32.475 "log_set_flag", 00:06:32.475 "log_get_level", 00:06:32.475 "log_set_level", 00:06:32.475 "log_get_print_level", 00:06:32.475 "log_set_print_level", 00:06:32.475 "framework_enable_cpumask_locks", 00:06:32.475 "framework_disable_cpumask_locks", 00:06:32.475 "framework_wait_init", 00:06:32.475 "framework_start_init", 00:06:32.475 "scsi_get_devices", 00:06:32.475 "bdev_get_histogram", 00:06:32.475 "bdev_enable_histogram", 00:06:32.475 "bdev_set_qos_limit", 00:06:32.475 "bdev_set_qd_sampling_period", 00:06:32.475 "bdev_get_bdevs", 00:06:32.475 "bdev_reset_iostat", 00:06:32.475 "bdev_get_iostat", 00:06:32.475 "bdev_examine", 00:06:32.475 "bdev_wait_for_examine", 00:06:32.475 "bdev_set_options", 00:06:32.475 "notify_get_notifications", 00:06:32.475 "notify_get_types", 00:06:32.475 "accel_get_stats", 00:06:32.475 "accel_set_options", 00:06:32.475 "accel_set_driver", 00:06:32.475 "accel_crypto_key_destroy", 00:06:32.475 "accel_crypto_keys_get", 00:06:32.475 "accel_crypto_key_create", 00:06:32.475 "accel_assign_opc", 00:06:32.475 "accel_get_module_info", 00:06:32.475 "accel_get_opc_assignments", 00:06:32.475 "vmd_rescan", 00:06:32.475 "vmd_remove_device", 00:06:32.475 "vmd_enable", 00:06:32.475 "sock_get_default_impl", 00:06:32.475 
"sock_set_default_impl", 00:06:32.475 "sock_impl_set_options", 00:06:32.475 "sock_impl_get_options", 00:06:32.475 "iobuf_get_stats", 00:06:32.475 "iobuf_set_options", 00:06:32.475 "keyring_get_keys", 00:06:32.475 "framework_get_pci_devices", 00:06:32.475 "framework_get_config", 00:06:32.475 "framework_get_subsystems", 00:06:32.475 "vfu_tgt_set_base_path", 00:06:32.475 "trace_get_info", 00:06:32.475 "trace_get_tpoint_group_mask", 00:06:32.475 "trace_disable_tpoint_group", 00:06:32.475 "trace_enable_tpoint_group", 00:06:32.475 "trace_clear_tpoint_mask", 00:06:32.475 "trace_set_tpoint_mask", 00:06:32.475 "spdk_get_version", 00:06:32.475 "rpc_get_methods" 00:06:32.475 ] 00:06:32.475 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.475 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:32.475 18:13:44 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60856 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 60856 ']' 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 60856 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60856 00:06:32.475 killing process with pid 60856 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60856' 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 60856 00:06:32.475 18:13:44 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 60856 00:06:35.006 ************************************ 00:06:35.006 END TEST spdkcli_tcp 00:06:35.006 ************************************ 00:06:35.006 00:06:35.006 real 0m4.125s 00:06:35.006 user 0m7.198s 00:06:35.006 sys 0m0.621s 00:06:35.006 18:13:46 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.006 18:13:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.006 18:13:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:35.006 18:13:46 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:35.006 18:13:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.006 18:13:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.006 18:13:46 -- common/autotest_common.sh@10 -- # set +x 00:06:35.006 ************************************ 00:06:35.006 START TEST dpdk_mem_utility 00:06:35.006 ************************************ 00:06:35.006 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:35.006 * Looking for test storage... 
00:06:35.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:35.006 18:13:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:35.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.006 18:13:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60970 00:06:35.006 18:13:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60970 00:06:35.006 18:13:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.006 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 60970 ']' 00:06:35.006 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.006 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.006 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.006 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.006 18:13:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:35.006 [2024-07-22 18:13:46.975041] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:35.006 [2024-07-22 18:13:46.976125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60970 ] 00:06:35.264 [2024-07-22 18:13:47.156479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.523 [2024-07-22 18:13:47.447080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.781 [2024-07-22 18:13:47.664837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.348 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.348 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:36.348 18:13:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:36.348 18:13:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:36.348 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.348 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.348 { 00:06:36.348 "filename": "/tmp/spdk_mem_dump.txt" 00:06:36.348 } 00:06:36.349 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.349 18:13:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:36.609 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:36.609 1 heaps totaling size 820.000000 MiB 00:06:36.609 size: 820.000000 MiB heap id: 0 00:06:36.609 end heaps---------- 00:06:36.609 8 mempools totaling size 598.116089 MiB 00:06:36.609 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:36.609 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:36.609 size: 84.521057 MiB name: bdev_io_60970 00:06:36.609 size: 51.011292 MiB name: evtpool_60970 00:06:36.609 size: 50.003479 
MiB name: msgpool_60970 00:06:36.609 size: 21.763794 MiB name: PDU_Pool 00:06:36.609 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:36.610 size: 0.026123 MiB name: Session_Pool 00:06:36.610 end mempools------- 00:06:36.610 6 memzones totaling size 4.142822 MiB 00:06:36.610 size: 1.000366 MiB name: RG_ring_0_60970 00:06:36.610 size: 1.000366 MiB name: RG_ring_1_60970 00:06:36.610 size: 1.000366 MiB name: RG_ring_4_60970 00:06:36.610 size: 1.000366 MiB name: RG_ring_5_60970 00:06:36.610 size: 0.125366 MiB name: RG_ring_2_60970 00:06:36.610 size: 0.015991 MiB name: RG_ring_3_60970 00:06:36.610 end memzones------- 00:06:36.610 18:13:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:36.610 heap id: 0 total size: 820.000000 MiB number of busy elements: 296 number of free elements: 18 00:06:36.610 list of free elements. size: 18.452515 MiB 00:06:36.610 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:36.610 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:36.610 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:36.610 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:36.610 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:36.610 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:36.610 element at address: 0x200019600000 with size: 0.999084 MiB 00:06:36.610 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:36.610 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:36.610 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:36.610 element at address: 0x200019900040 with size: 0.936401 MiB 00:06:36.610 element at address: 0x200000200000 with size: 0.830200 MiB 00:06:36.610 element at address: 0x20001b000000 with size: 0.565125 MiB 00:06:36.610 element at address: 0x200019200000 with size: 0.487976 MiB 00:06:36.610 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:36.610 element at address: 0x200013800000 with size: 0.467651 MiB 00:06:36.610 element at address: 0x200028400000 with size: 0.390442 MiB 00:06:36.610 element at address: 0x200003a00000 with size: 0.351990 MiB 00:06:36.610 list of standard malloc elements. 
size: 199.283081 MiB 00:06:36.610 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:36.610 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:36.610 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:36.610 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:36.610 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:36.610 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:36.610 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:36.610 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:36.610 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:06:36.610 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:06:36.610 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:36.610 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d6f00 with size: 0.000244 MiB 
00:06:36.610 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:06:36.610 element at 
address: 0x2000137ff380 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200013877b80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200013877c80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200013877d80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200013877e80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200013877f80 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200013878080 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200013878180 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200013878280 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200013878380 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200013878480 with size: 0.000244 MiB 00:06:36.610 element at address: 0x200013878580 with size: 0.000244 MiB 00:06:36.611 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:36.611 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:06:36.611 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x200019abc680 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0912c0 
with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0943c0 with size: 0.000244 MiB 
00:06:36.611 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:06:36.611 element at address: 0x200028463f40 with size: 0.000244 MiB 00:06:36.611 element at address: 0x200028464040 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846af80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846b080 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846b180 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846b280 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846b380 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846b480 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846b580 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846b680 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846b780 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846b880 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846b980 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846be80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846c080 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846c180 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846c280 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846c380 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846c480 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846c580 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846c680 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846c780 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846c880 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846c980 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:06:36.611 element at 
address: 0x20002846cd80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846d080 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846d180 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846d280 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846d380 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846d480 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846d580 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846d680 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846d780 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846d880 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846d980 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846da80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846db80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846de80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846df80 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846e080 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846e180 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846e280 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846e380 with size: 0.000244 MiB 00:06:36.611 element at address: 0x20002846e480 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846e580 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846e680 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846e780 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846e880 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846e980 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846f080 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846f180 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846f280 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846f380 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846f480 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846f580 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846f680 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846f780 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846f880 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846f980 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:06:36.612 element at address: 0x20002846fe80 
with size: 0.000244 MiB 00:06:36.612 list of memzone associated elements. size: 602.264404 MiB 00:06:36.612 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:36.612 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:36.612 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:36.612 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:36.612 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:36.612 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60970_0 00:06:36.612 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:36.612 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60970_0 00:06:36.612 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:36.612 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60970_0 00:06:36.612 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:36.612 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:36.612 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:36.612 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:36.612 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:36.612 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60970 00:06:36.612 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:36.612 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60970 00:06:36.612 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:36.612 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60970 00:06:36.612 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:36.612 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:36.612 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:36.612 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:36.612 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:36.612 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:36.612 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:36.612 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:36.612 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:36.612 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60970 00:06:36.612 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:36.612 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60970 00:06:36.612 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:36.612 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60970 00:06:36.612 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:36.612 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60970 00:06:36.612 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:36.612 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60970 00:06:36.612 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:06:36.612 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:36.612 element at address: 0x200013878680 with size: 0.500549 MiB 00:06:36.612 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:36.612 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:36.612 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:36.612 element at address: 
0x200003adf740 with size: 0.125549 MiB 00:06:36.612 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60970 00:06:36.612 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:36.612 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:36.612 element at address: 0x200028464140 with size: 0.023804 MiB 00:06:36.612 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:36.612 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:36.612 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60970 00:06:36.612 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:06:36.612 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:36.612 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:06:36.612 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60970 00:06:36.612 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:36.612 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60970 00:06:36.612 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:06:36.612 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:36.612 18:13:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:36.612 18:13:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60970 00:06:36.612 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 60970 ']' 00:06:36.612 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 60970 00:06:36.612 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:36.612 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.612 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60970 00:06:36.612 killing process with pid 60970 00:06:36.612 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.612 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.612 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60970' 00:06:36.612 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 60970 00:06:36.612 18:13:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 60970 00:06:39.144 00:06:39.144 real 0m4.140s 00:06:39.144 user 0m4.078s 00:06:39.144 sys 0m0.636s 00:06:39.144 18:13:50 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.145 ************************************ 00:06:39.145 END TEST dpdk_mem_utility 00:06:39.145 ************************************ 00:06:39.145 18:13:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:39.145 18:13:50 -- common/autotest_common.sh@1142 -- # return 0 00:06:39.145 18:13:50 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:39.145 18:13:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.145 18:13:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.145 18:13:50 -- common/autotest_common.sh@10 -- # set +x 00:06:39.145 ************************************ 00:06:39.145 START TEST event 00:06:39.145 ************************************ 00:06:39.145 18:13:50 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:39.145 * Looking for test storage... 
00:06:39.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:39.145 18:13:51 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:39.145 18:13:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:39.145 18:13:51 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:39.145 18:13:51 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:39.145 18:13:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.145 18:13:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.145 ************************************ 00:06:39.145 START TEST event_perf 00:06:39.145 ************************************ 00:06:39.145 18:13:51 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:39.145 Running I/O for 1 seconds...[2024-07-22 18:13:51.092576] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:39.145 [2024-07-22 18:13:51.092874] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61070 ] 00:06:39.403 [2024-07-22 18:13:51.270447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.661 [2024-07-22 18:13:51.562977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.661 [2024-07-22 18:13:51.563133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.661 Running I/O for 1 seconds...[2024-07-22 18:13:51.563272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.661 [2024-07-22 18:13:51.563300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.035 00:06:41.035 lcore 0: 185264 00:06:41.035 lcore 1: 185265 00:06:41.035 lcore 2: 185265 00:06:41.035 lcore 3: 185264 00:06:41.035 done. 00:06:41.035 00:06:41.035 real 0m1.944s 00:06:41.035 user 0m4.686s 00:06:41.035 sys 0m0.124s 00:06:41.035 18:13:52 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.035 18:13:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.035 ************************************ 00:06:41.035 END TEST event_perf 00:06:41.035 ************************************ 00:06:41.035 18:13:53 event -- common/autotest_common.sh@1142 -- # return 0 00:06:41.035 18:13:53 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:41.035 18:13:53 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:41.035 18:13:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.035 18:13:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.035 ************************************ 00:06:41.035 START TEST event_reactor 00:06:41.035 ************************************ 00:06:41.035 18:13:53 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:41.294 [2024-07-22 18:13:53.100679] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:41.294 [2024-07-22 18:13:53.100926] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61115 ] 00:06:41.294 [2024-07-22 18:13:53.289739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.553 [2024-07-22 18:13:53.542823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.453 test_start 00:06:43.453 oneshot 00:06:43.453 tick 100 00:06:43.453 tick 100 00:06:43.453 tick 250 00:06:43.453 tick 100 00:06:43.453 tick 100 00:06:43.453 tick 250 00:06:43.453 tick 500 00:06:43.453 tick 100 00:06:43.453 tick 100 00:06:43.453 tick 100 00:06:43.453 tick 250 00:06:43.453 tick 100 00:06:43.453 tick 100 00:06:43.453 test_end 00:06:43.453 00:06:43.453 real 0m1.927s 00:06:43.453 user 0m1.664s 00:06:43.453 sys 0m0.149s 00:06:43.453 18:13:54 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.453 18:13:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:43.453 ************************************ 00:06:43.453 END TEST event_reactor 00:06:43.453 ************************************ 00:06:43.453 18:13:55 event -- common/autotest_common.sh@1142 -- # return 0 00:06:43.453 18:13:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:43.454 18:13:55 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:43.454 18:13:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.454 18:13:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.454 ************************************ 00:06:43.454 START TEST event_reactor_perf 00:06:43.454 ************************************ 00:06:43.454 18:13:55 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:43.454 [2024-07-22 18:13:55.077063] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:43.454 [2024-07-22 18:13:55.077277] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61157 ] 00:06:43.454 [2024-07-22 18:13:55.249694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.712 [2024-07-22 18:13:55.494530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.086 test_start 00:06:45.086 test_end 00:06:45.086 Performance: 257977 events per second 00:06:45.086 00:06:45.086 real 0m1.888s 00:06:45.086 user 0m1.659s 00:06:45.086 sys 0m0.116s 00:06:45.086 ************************************ 00:06:45.086 END TEST event_reactor_perf 00:06:45.086 ************************************ 00:06:45.086 18:13:56 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.086 18:13:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.086 18:13:56 event -- common/autotest_common.sh@1142 -- # return 0 00:06:45.086 18:13:56 event -- event/event.sh@49 -- # uname -s 00:06:45.086 18:13:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:45.086 18:13:56 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:45.086 18:13:56 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.086 18:13:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.086 18:13:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.086 ************************************ 00:06:45.086 START TEST event_scheduler 00:06:45.086 ************************************ 00:06:45.086 18:13:56 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:45.086 * Looking for test storage... 00:06:45.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:45.086 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:45.086 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61225 00:06:45.086 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:45.086 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.086 18:13:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61225 00:06:45.086 18:13:57 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 61225 ']' 00:06:45.086 18:13:57 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.086 18:13:57 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.086 18:13:57 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.086 18:13:57 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.086 18:13:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.346 [2024-07-22 18:13:57.173696] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:45.346 [2024-07-22 18:13:57.173897] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61225 ] 00:06:45.346 [2024-07-22 18:13:57.347990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.913 [2024-07-22 18:13:57.629003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.913 [2024-07-22 18:13:57.629156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.913 [2024-07-22 18:13:57.630156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.913 [2024-07-22 18:13:57.630166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.171 18:13:58 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.171 18:13:58 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:46.171 18:13:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:46.171 18:13:58 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.171 18:13:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.171 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:46.171 POWER: Cannot set governor of lcore 0 to userspace 00:06:46.171 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:46.171 POWER: Cannot set governor of lcore 0 to performance 00:06:46.171 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:46.171 POWER: Cannot set governor of lcore 0 to userspace 00:06:46.171 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:46.171 POWER: Cannot set governor of lcore 0 to userspace 00:06:46.171 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:46.171 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:46.171 POWER: Unable to set Power Management Environment for lcore 0 00:06:46.171 [2024-07-22 18:13:58.108587] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:46.171 [2024-07-22 18:13:58.108609] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:46.171 [2024-07-22 18:13:58.108627] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:46.171 [2024-07-22 18:13:58.108731] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:46.171 [2024-07-22 18:13:58.108752] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:46.171 [2024-07-22 18:13:58.108765] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:46.171 18:13:58 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.171 18:13:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:46.171 18:13:58 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.171 18:13:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.429 [2024-07-22 18:13:58.329734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.429 [2024-07-22 18:13:58.440007] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:46.429 18:13:58 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.429 18:13:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:46.429 18:13:58 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.429 18:13:58 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.429 18:13:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 ************************************ 00:06:46.688 START TEST scheduler_create_thread 00:06:46.688 ************************************ 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 2 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 3 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 4 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 5 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 6 00:06:46.688 
18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 7 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 8 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 9 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 10 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.688 18:13:58 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.688 18:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.635 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.635 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:47.635 18:13:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:47.635 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.635 18:13:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.009 ************************************ 00:06:49.009 END TEST scheduler_create_thread 00:06:49.009 ************************************ 00:06:49.009 18:14:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.009 00:06:49.009 real 0m2.137s 00:06:49.009 user 0m0.017s 00:06:49.009 sys 0m0.007s 00:06:49.009 18:14:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.009 18:14:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.009 18:14:00 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:49.009 18:14:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:49.009 18:14:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61225 00:06:49.009 18:14:00 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 61225 ']' 00:06:49.009 18:14:00 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 61225 00:06:49.009 18:14:00 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:49.009 18:14:00 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.009 18:14:00 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61225 00:06:49.009 killing process with pid 61225 00:06:49.009 18:14:00 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:49.009 18:14:00 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:49.009 18:14:00 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61225' 00:06:49.009 18:14:00 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 61225 00:06:49.009 18:14:00 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 61225 00:06:49.266 [2024-07-22 18:14:01.070192] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:50.642 00:06:50.642 real 0m5.295s 00:06:50.642 user 0m8.460s 00:06:50.642 sys 0m0.505s 00:06:50.642 18:14:02 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.642 18:14:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.642 ************************************ 00:06:50.642 END TEST event_scheduler 00:06:50.642 ************************************ 00:06:50.642 18:14:02 event -- common/autotest_common.sh@1142 -- # return 0 00:06:50.642 18:14:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:50.642 18:14:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:50.642 18:14:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.642 18:14:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.642 18:14:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.642 ************************************ 00:06:50.642 START TEST app_repeat 00:06:50.642 ************************************ 00:06:50.642 18:14:02 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61331 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.642 Process app_repeat pid: 61331 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61331' 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:50.642 spdk_app_start Round 0 00:06:50.642 18:14:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61331 /var/tmp/spdk-nbd.sock 00:06:50.642 18:14:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61331 ']' 00:06:50.642 18:14:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.642 18:14:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.642 18:14:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.642 18:14:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.642 18:14:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.642 [2024-07-22 18:14:02.390918] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:50.642 [2024-07-22 18:14:02.391062] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61331 ] 00:06:50.642 [2024-07-22 18:14:02.549929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.900 [2024-07-22 18:14:02.789760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.900 [2024-07-22 18:14:02.789769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.158 [2024-07-22 18:14:03.004805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.416 18:14:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.416 18:14:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:51.416 18:14:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.673 Malloc0 00:06:51.930 18:14:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.188 Malloc1 00:06:52.188 18:14:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.188 18:14:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.188 /dev/nbd0 00:06:52.445 18:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.445 18:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:52.445 18:14:04 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.445 1+0 records in 00:06:52.445 1+0 records out 00:06:52.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017738 s, 23.1 MB/s 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:52.445 18:14:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:52.445 18:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.445 18:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.445 18:14:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.704 /dev/nbd1 00:06:52.704 18:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.704 18:14:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.704 1+0 records in 00:06:52.704 1+0 records out 00:06:52.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230738 s, 17.8 MB/s 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:52.704 18:14:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:52.704 18:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.704 18:14:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.704 18:14:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:06:52.704 18:14:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.704 18:14:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.963 { 00:06:52.963 "nbd_device": "/dev/nbd0", 00:06:52.963 "bdev_name": "Malloc0" 00:06:52.963 }, 00:06:52.963 { 00:06:52.963 "nbd_device": "/dev/nbd1", 00:06:52.963 "bdev_name": "Malloc1" 00:06:52.963 } 00:06:52.963 ]' 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.963 { 00:06:52.963 "nbd_device": "/dev/nbd0", 00:06:52.963 "bdev_name": "Malloc0" 00:06:52.963 }, 00:06:52.963 { 00:06:52.963 "nbd_device": "/dev/nbd1", 00:06:52.963 "bdev_name": "Malloc1" 00:06:52.963 } 00:06:52.963 ]' 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.963 /dev/nbd1' 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.963 /dev/nbd1' 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.963 256+0 records in 00:06:52.963 256+0 records out 00:06:52.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467216 s, 224 MB/s 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.963 256+0 records in 00:06:52.963 256+0 records out 00:06:52.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256682 s, 40.9 MB/s 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.963 256+0 records in 00:06:52.963 256+0 records out 00:06:52.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0327824 s, 32.0 MB/s 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.963 18:14:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.221 18:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.221 18:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.221 18:14:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.221 18:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.221 18:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.221 18:14:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.221 18:14:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.221 18:14:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.221 18:14:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.221 18:14:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.479 18:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.479 18:14:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.479 18:14:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.479 18:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.479 18:14:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.479 18:14:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.479 18:14:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.479 18:14:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.479 18:14:05 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.479 18:14:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.479 18:14:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.736 18:14:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.736 18:14:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.736 18:14:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.994 18:14:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.994 18:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.994 18:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.994 18:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.994 18:14:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.994 18:14:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.994 18:14:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.994 18:14:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.994 18:14:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.994 18:14:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.252 18:14:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.654 [2024-07-22 18:14:07.368805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.654 [2024-07-22 18:14:07.595961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.654 [2024-07-22 18:14:07.595966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.913 [2024-07-22 18:14:07.788514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.913 [2024-07-22 18:14:07.788610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.913 [2024-07-22 18:14:07.788635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.288 spdk_app_start Round 1 00:06:57.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.288 18:14:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.288 18:14:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:57.288 18:14:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61331 /var/tmp/spdk-nbd.sock 00:06:57.288 18:14:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61331 ']' 00:06:57.288 18:14:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.288 18:14:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.288 18:14:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:57.288 18:14:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.288 18:14:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.547 18:14:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.547 18:14:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:57.547 18:14:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.805 Malloc0 00:06:57.805 18:14:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.078 Malloc1 00:06:58.078 18:14:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.078 18:14:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.348 /dev/nbd0 00:06:58.348 18:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.348 18:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.348 1+0 records in 00:06:58.348 1+0 records out 
00:06:58.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196159 s, 20.9 MB/s 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:58.348 18:14:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.349 18:14:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:58.349 18:14:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:58.349 18:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.349 18:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.349 18:14:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:58.609 /dev/nbd1 00:06:58.868 18:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.868 18:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.868 1+0 records in 00:06:58.868 1+0 records out 00:06:58.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268527 s, 15.3 MB/s 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:58.868 18:14:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:58.868 18:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.868 18:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.868 18:14:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.868 18:14:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.868 18:14:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.126 { 00:06:59.126 "nbd_device": "/dev/nbd0", 00:06:59.126 "bdev_name": "Malloc0" 00:06:59.126 }, 00:06:59.126 { 00:06:59.126 "nbd_device": "/dev/nbd1", 00:06:59.126 "bdev_name": "Malloc1" 00:06:59.126 } 
00:06:59.126 ]' 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.126 { 00:06:59.126 "nbd_device": "/dev/nbd0", 00:06:59.126 "bdev_name": "Malloc0" 00:06:59.126 }, 00:06:59.126 { 00:06:59.126 "nbd_device": "/dev/nbd1", 00:06:59.126 "bdev_name": "Malloc1" 00:06:59.126 } 00:06:59.126 ]' 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.126 /dev/nbd1' 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.126 /dev/nbd1' 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.126 18:14:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.127 18:14:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.127 256+0 records in 00:06:59.127 256+0 records out 00:06:59.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00771416 s, 136 MB/s 00:06:59.127 18:14:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.127 18:14:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.127 256+0 records in 00:06:59.127 256+0 records out 00:06:59.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027247 s, 38.5 MB/s 00:06:59.127 18:14:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.127 18:14:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.127 256+0 records in 00:06:59.127 256+0 records out 00:06:59.127 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0353225 s, 29.7 MB/s 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.127 18:14:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.385 18:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.385 18:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.385 18:14:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.385 18:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.385 18:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.385 18:14:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.385 18:14:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.385 18:14:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.385 18:14:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.385 18:14:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.643 18:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.643 18:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.643 18:14:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.643 18:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.643 18:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.643 18:14:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.644 18:14:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.210 18:14:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.210 18:14:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.468 18:14:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.849 [2024-07-22 18:14:13.589659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.849 [2024-07-22 18:14:13.824276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.849 [2024-07-22 18:14:13.824278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.106 [2024-07-22 18:14:14.012448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.106 [2024-07-22 18:14:14.012586] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.106 [2024-07-22 18:14:14.012609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.480 spdk_app_start Round 2 00:07:03.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:03.480 18:14:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.480 18:14:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:03.480 18:14:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61331 /var/tmp/spdk-nbd.sock 00:07:03.480 18:14:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61331 ']' 00:07:03.480 18:14:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.480 18:14:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.480 18:14:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
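nbd_get_count, traced several times above (with two disks attached and again after they were stopped), is just the nbd_get_disks RPC piped through jq and grep; reconstructed from the trace (bdev/nbd_common.sh itself is not reproduced in this log):

    nbd_get_count() {
        local rpc_server=$1 nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # '|| true' so an empty list yields 0 instead of failing under set -e
        echo "$count"
    }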
00:07:03.481 18:14:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.481 18:14:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.743 18:14:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.743 18:14:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:03.743 18:14:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.010 Malloc0 00:07:04.010 18:14:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.576 Malloc1 00:07:04.576 18:14:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.576 18:14:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.577 18:14:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.577 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.577 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.577 18:14:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.835 /dev/nbd0 00:07:04.835 18:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.835 18:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.835 1+0 records in 00:07:04.835 1+0 records out 
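Round 2 repeats the same setup; the RPC sequence this round runs, condensed (socket path and sizes copied from the log, and Malloc1 is attached the same way just below):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0 (64 MiB, 4 KiB blocks)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1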
00:07:04.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338873 s, 12.1 MB/s 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.835 18:14:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:04.835 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.835 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.835 18:14:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.094 /dev/nbd1 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.094 1+0 records in 00:07:05.094 1+0 records out 00:07:05.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288074 s, 14.2 MB/s 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:05.094 18:14:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.094 18:14:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.354 { 00:07:05.354 "nbd_device": "/dev/nbd0", 00:07:05.354 "bdev_name": "Malloc0" 00:07:05.354 }, 00:07:05.354 { 00:07:05.354 "nbd_device": "/dev/nbd1", 00:07:05.354 "bdev_name": "Malloc1" 00:07:05.354 } 
00:07:05.354 ]' 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.354 { 00:07:05.354 "nbd_device": "/dev/nbd0", 00:07:05.354 "bdev_name": "Malloc0" 00:07:05.354 }, 00:07:05.354 { 00:07:05.354 "nbd_device": "/dev/nbd1", 00:07:05.354 "bdev_name": "Malloc1" 00:07:05.354 } 00:07:05.354 ]' 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.354 /dev/nbd1' 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.354 /dev/nbd1' 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.354 256+0 records in 00:07:05.354 256+0 records out 00:07:05.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00830622 s, 126 MB/s 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.354 256+0 records in 00:07:05.354 256+0 records out 00:07:05.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258048 s, 40.6 MB/s 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.354 256+0 records in 00:07:05.354 256+0 records out 00:07:05.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297901 s, 35.2 MB/s 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.354 18:14:17 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.354 18:14:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.612 18:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.612 18:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.612 18:14:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.612 18:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.612 18:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.612 18:14:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.612 18:14:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.612 18:14:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.612 18:14:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.612 18:14:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.179 18:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.179 18:14:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.179 18:14:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.179 18:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.179 18:14:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.179 18:14:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.179 18:14:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.179 18:14:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.179 18:14:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.179 18:14:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.179 18:14:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.179 18:14:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.179 18:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.179 18:14:18 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:06.476 18:14:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.476 18:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.476 18:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.476 18:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.476 18:14:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.476 18:14:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.476 18:14:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.476 18:14:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.476 18:14:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.476 18:14:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:06.734 18:14:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:08.109 [2024-07-22 18:14:19.787158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.109 [2024-07-22 18:14:20.008813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.109 [2024-07-22 18:14:20.008817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.368 [2024-07-22 18:14:20.194529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.368 [2024-07-22 18:14:20.194666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:08.368 [2024-07-22 18:14:20.194691] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:09.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:09.741 18:14:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61331 /var/tmp/spdk-nbd.sock 00:07:09.741 18:14:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 61331 ']' 00:07:09.741 18:14:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.741 18:14:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.741 18:14:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
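Each round's data check, traced above as nbd_dd_data_verify, is a plain dd-and-cmp round trip; condensed, with paths shortened:

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct     # push it through each nbd device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest $nbd                                # read back and byte-compare the first 1 MiB
    done
    rm nbdrandtest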
00:07:09.741 18:14:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.741 18:14:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:09.999 18:14:21 event.app_repeat -- event/event.sh@39 -- # killprocess 61331 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 61331 ']' 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 61331 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61331 00:07:09.999 killing process with pid 61331 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61331' 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@967 -- # kill 61331 00:07:09.999 18:14:21 event.app_repeat -- common/autotest_common.sh@972 -- # wait 61331 00:07:11.375 spdk_app_start is called in Round 0. 00:07:11.375 Shutdown signal received, stop current app iteration 00:07:11.375 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:11.375 spdk_app_start is called in Round 1. 00:07:11.375 Shutdown signal received, stop current app iteration 00:07:11.375 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:11.375 spdk_app_start is called in Round 2. 00:07:11.375 Shutdown signal received, stop current app iteration 00:07:11.375 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:07:11.375 spdk_app_start is called in Round 3. 
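killprocess 61331, traced above, resolves the process name before sending the signal; reconstructed as a sketch (the helper's special handling when the process turns out to be sudo is omitted here):

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid"                                        # fail early if the process is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # here: reactor_0
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap it so the exit status is observed
    }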
00:07:11.375 Shutdown signal received, stop current app iteration 00:07:11.375 18:14:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:11.375 18:14:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:11.375 00:07:11.375 real 0m20.683s 00:07:11.375 user 0m44.135s 00:07:11.375 sys 0m2.926s 00:07:11.375 18:14:23 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.375 18:14:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.375 ************************************ 00:07:11.375 END TEST app_repeat 00:07:11.375 ************************************ 00:07:11.375 18:14:23 event -- common/autotest_common.sh@1142 -- # return 0 00:07:11.375 18:14:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:11.375 18:14:23 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:11.375 18:14:23 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.375 18:14:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.375 18:14:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.375 ************************************ 00:07:11.375 START TEST cpu_locks 00:07:11.375 ************************************ 00:07:11.375 18:14:23 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:11.375 * Looking for test storage... 00:07:11.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:11.375 18:14:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:11.375 18:14:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:11.375 18:14:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:11.375 18:14:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:11.375 18:14:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.375 18:14:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.375 18:14:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.375 ************************************ 00:07:11.375 START TEST default_locks 00:07:11.375 ************************************ 00:07:11.375 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:11.375 18:14:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=61789 00:07:11.375 18:14:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 61789 00:07:11.375 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61789 ']' 00:07:11.375 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.375 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.375 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
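waitforlisten's body is hidden behind xtrace_disable throughout this log, so only its banner and return value are visible; a plausible shape for it (an assumption, not the verbatim helper) is a bounded poll of the RPC socket:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1                                     # the target died while starting up
            rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }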
00:07:11.375 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.375 18:14:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.375 18:14:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.375 [2024-07-22 18:14:23.290651] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:11.375 [2024-07-22 18:14:23.290857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61789 ] 00:07:11.634 [2024-07-22 18:14:23.465891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.893 [2024-07-22 18:14:23.742378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.182 [2024-07-22 18:14:23.957708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.749 18:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.749 18:14:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:12.749 18:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 61789 00:07:12.749 18:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.749 18:14:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 61789 00:07:13.007 18:14:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 61789 00:07:13.007 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 61789 ']' 00:07:13.007 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 61789 00:07:13.007 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:13.007 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.007 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61789 00:07:13.266 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.266 killing process with pid 61789 00:07:13.266 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.266 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61789' 00:07:13.266 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 61789 00:07:13.266 18:14:25 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 61789 00:07:15.796 18:14:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 61789 00:07:15.796 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:15.796 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61789 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:15.797 18:14:27 
event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 61789 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 61789 ']' 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.797 ERROR: process (pid: 61789) is no longer running 00:07:15.797 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61789) - No such process 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:15.797 ************************************ 00:07:15.797 END TEST default_locks 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:15.797 00:07:15.797 real 0m4.117s 00:07:15.797 user 0m4.091s 00:07:15.797 sys 0m0.724s 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.797 18:14:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.797 ************************************ 00:07:15.797 18:14:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:15.797 18:14:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:15.797 18:14:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.797 18:14:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.797 18:14:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.797 ************************************ 00:07:15.797 START TEST default_locks_via_rpc 00:07:15.797 ************************************ 00:07:15.797 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:15.797 18:14:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61864 00:07:15.797 18:14:27 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@63 -- # waitforlisten 61864 00:07:15.797 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61864 ']' 00:07:15.797 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.797 18:14:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.797 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.797 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.797 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.797 18:14:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.797 [2024-07-22 18:14:27.452168] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:15.797 [2024-07-22 18:14:27.452372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61864 ] 00:07:15.797 [2024-07-22 18:14:27.632406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.055 [2024-07-22 18:14:27.923763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.313 [2024-07-22 18:14:28.129926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61864 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # lslocks -p 61864 00:07:16.879 18:14:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.137 18:14:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61864 00:07:17.137 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 61864 ']' 00:07:17.137 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 61864 00:07:17.137 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:17.137 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.137 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61864 00:07:17.137 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.137 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.137 killing process with pid 61864 00:07:17.137 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61864' 00:07:17.137 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 61864 00:07:17.137 18:14:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 61864 00:07:19.664 00:07:19.664 real 0m3.984s 00:07:19.664 user 0m3.952s 00:07:19.664 sys 0m0.707s 00:07:19.664 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.664 18:14:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.664 ************************************ 00:07:19.664 END TEST default_locks_via_rpc 00:07:19.664 ************************************ 00:07:19.664 18:14:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:19.664 18:14:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:19.664 18:14:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.664 18:14:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.664 18:14:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.664 ************************************ 00:07:19.664 START TEST non_locking_app_on_locked_coremask 00:07:19.664 ************************************ 00:07:19.664 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:19.664 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61936 00:07:19.664 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.664 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61936 /var/tmp/spdk.sock 00:07:19.664 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61936 ']' 00:07:19.664 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.664 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:07:19.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.664 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.664 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.664 18:14:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.664 [2024-07-22 18:14:31.473457] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:19.664 [2024-07-22 18:14:31.473605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61936 ] 00:07:19.664 [2024-07-22 18:14:31.637348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.923 [2024-07-22 18:14:31.871108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.181 [2024-07-22 18:14:32.064234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.747 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.747 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:20.747 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61957 00:07:20.747 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:20.747 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61957 /var/tmp/spdk2.sock 00:07:20.747 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61957 ']' 00:07:20.747 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.747 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.747 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.747 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.747 18:14:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.006 [2024-07-22 18:14:32.809046] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:21.006 [2024-07-22 18:14:32.809288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61957 ] 00:07:21.006 [2024-07-22 18:14:32.993729] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
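The lock these tests exercise appears to be a per-core lock file that spdk_tgt holds for every core in its mask (released when started with --disable-cpumask-locks, as the "CPU core locks deactivated" notice above shows); the check itself, traced as locks_exist, is one pipeline, and the default_locks_via_rpc variant toggles the same state at runtime:

    locks_exist() {                               # true if the target still holds an spdk_cpu_lock file
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    rpc.py framework_disable_cpumask_locks        # drop the per-core locks of a running target
    rpc.py framework_enable_cpumask_locks         # take them again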
00:07:21.006 [2024-07-22 18:14:32.993814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.577 [2024-07-22 18:14:33.478866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.143 [2024-07-22 18:14:33.885686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.520 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.520 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:23.520 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61936 00:07:23.520 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61936 00:07:23.520 18:14:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.520 18:14:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61936 00:07:24.520 18:14:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61936 ']' 00:07:24.520 18:14:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61936 00:07:24.520 18:14:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:24.520 18:14:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.520 18:14:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61936 00:07:24.520 killing process with pid 61936 00:07:24.520 18:14:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:24.520 18:14:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:24.520 18:14:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61936' 00:07:24.520 18:14:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61936 00:07:24.520 18:14:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61936 00:07:29.782 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61957 00:07:29.782 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61957 ']' 00:07:29.782 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61957 00:07:29.782 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:29.782 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.782 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61957 00:07:29.783 killing process with pid 61957 00:07:29.783 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.783 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.783 18:14:40 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61957' 00:07:29.783 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61957 00:07:29.783 18:14:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61957 00:07:31.155 00:07:31.155 real 0m11.609s 00:07:31.155 user 0m12.074s 00:07:31.155 sys 0m1.563s 00:07:31.155 18:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.155 18:14:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.155 ************************************ 00:07:31.155 END TEST non_locking_app_on_locked_coremask 00:07:31.155 ************************************ 00:07:31.155 18:14:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:31.155 18:14:43 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:31.155 18:14:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.155 18:14:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.155 18:14:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.155 ************************************ 00:07:31.155 START TEST locking_app_on_unlocked_coremask 00:07:31.155 ************************************ 00:07:31.155 18:14:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:31.155 18:14:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62101 00:07:31.155 18:14:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:31.155 18:14:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 62101 /var/tmp/spdk.sock 00:07:31.155 18:14:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62101 ']' 00:07:31.155 18:14:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.155 18:14:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.155 18:14:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.155 18:14:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.155 18:14:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.155 [2024-07-22 18:14:43.150242] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:31.155 [2024-07-22 18:14:43.150708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62101 ] 00:07:31.424 [2024-07-22 18:14:43.324331] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:31.424 [2024-07-22 18:14:43.324620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.683 [2024-07-22 18:14:43.565635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.941 [2024-07-22 18:14:43.768058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.506 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.506 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:32.506 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:32.506 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62122 00:07:32.506 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62122 /var/tmp/spdk2.sock 00:07:32.506 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62122 ']' 00:07:32.506 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.506 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.506 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.506 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.506 18:14:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.506 [2024-07-22 18:14:44.464566] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:32.506 [2024-07-22 18:14:44.464732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62122 ] 00:07:32.764 [2024-07-22 18:14:44.634164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.329 [2024-07-22 18:14:45.111688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.587 [2024-07-22 18:14:45.515950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.524 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.524 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:35.524 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62122 00:07:35.524 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:35.524 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62122 00:07:36.089 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 62101 00:07:36.089 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62101 ']' 00:07:36.089 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 62101 00:07:36.089 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:36.089 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.089 18:14:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62101 00:07:36.089 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.090 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.090 killing process with pid 62101 00:07:36.090 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62101' 00:07:36.090 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 62101 00:07:36.090 18:14:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 62101 00:07:41.413 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62122 00:07:41.413 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62122 ']' 00:07:41.413 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 62122 00:07:41.413 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:41.413 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.413 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62122 00:07:41.413 killing process with pid 62122 00:07:41.413 18:14:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.413 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.413 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62122' 00:07:41.413 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 62122 00:07:41.413 18:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 62122 00:07:42.788 ************************************ 00:07:42.788 END TEST locking_app_on_unlocked_coremask 00:07:42.788 ************************************ 00:07:42.788 00:07:42.788 real 0m11.623s 00:07:42.788 user 0m12.016s 00:07:42.788 sys 0m1.475s 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.788 18:14:54 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:42.788 18:14:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:42.788 18:14:54 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:42.788 18:14:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.788 18:14:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.788 ************************************ 00:07:42.788 START TEST locking_app_on_locked_coremask 00:07:42.788 ************************************ 00:07:42.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62266 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 62266 /var/tmp/spdk.sock 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62266 ']' 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.788 18:14:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.046 [2024-07-22 18:14:54.829069] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:43.046 [2024-07-22 18:14:54.829278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62266 ] 00:07:43.046 [2024-07-22 18:14:55.005848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.304 [2024-07-22 18:14:55.245646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.563 [2024-07-22 18:14:55.444939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62292 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62292 /var/tmp/spdk2.sock 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62292 /var/tmp/spdk2.sock 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:44.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62292 /var/tmp/spdk2.sock 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62292 ']' 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.131 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.422 [2024-07-22 18:14:56.158155] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:44.422 [2024-07-22 18:14:56.158392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62292 ] 00:07:44.422 [2024-07-22 18:14:56.340137] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62266 has claimed it. 00:07:44.422 [2024-07-22 18:14:56.344321] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:44.990 ERROR: process (pid: 62292) is no longer running 00:07:44.990 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62292) - No such process 00:07:44.990 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.990 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:44.990 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:44.990 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:44.990 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:44.990 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:44.990 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 62266 00:07:44.990 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62266 00:07:44.990 18:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:45.249 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 62266 00:07:45.249 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62266 ']' 00:07:45.249 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62266 00:07:45.249 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:45.249 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.249 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62266 00:07:45.507 killing process with pid 62266 00:07:45.507 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.507 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.507 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62266' 00:07:45.507 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62266 00:07:45.507 18:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62266 00:07:48.038 ************************************ 00:07:48.038 END TEST locking_app_on_locked_coremask 00:07:48.038 ************************************ 00:07:48.038 00:07:48.038 real 0m4.745s 00:07:48.038 user 0m5.047s 00:07:48.038 sys 0m0.909s 00:07:48.038 18:14:59 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.038 18:14:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.038 18:14:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:48.039 18:14:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:48.039 18:14:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:48.039 18:14:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.039 18:14:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:48.039 ************************************ 00:07:48.039 START TEST locking_overlapped_coremask 00:07:48.039 ************************************ 00:07:48.039 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:48.039 18:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62356 00:07:48.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.039 18:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62356 /var/tmp/spdk.sock 00:07:48.039 18:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:48.039 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 62356 ']' 00:07:48.039 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.039 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.039 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.039 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.039 18:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:48.039 [2024-07-22 18:14:59.646765] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:48.039 [2024-07-22 18:14:59.646955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62356 ] 00:07:48.039 [2024-07-22 18:14:59.822894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.298 [2024-07-22 18:15:00.076896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.298 [2024-07-22 18:15:00.077091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.298 [2024-07-22 18:15:00.077102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.298 [2024-07-22 18:15:00.281921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62380 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62380 /var/tmp/spdk2.sock 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62380 /var/tmp/spdk2.sock 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 62380 /var/tmp/spdk2.sock 00:07:49.257 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 62380 ']' 00:07:49.258 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:49.258 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.258 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:49.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:49.258 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.258 18:15:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.258 [2024-07-22 18:15:01.008636] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:49.258 [2024-07-22 18:15:01.009059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62380 ] 00:07:49.258 [2024-07-22 18:15:01.188889] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62356 has claimed it. 00:07:49.258 [2024-07-22 18:15:01.188977] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:49.840 ERROR: process (pid: 62380) is no longer running 00:07:49.840 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62380) - No such process 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62356 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 62356 ']' 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 62356 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62356 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62356' 00:07:49.840 killing process with pid 62356 00:07:49.840 18:15:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 62356 00:07:49.840 18:15:01 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 62356 00:07:52.376 00:07:52.376 real 0m4.415s 00:07:52.376 user 0m11.468s 00:07:52.376 sys 0m0.696s 00:07:52.376 18:15:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.376 18:15:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.376 ************************************ 00:07:52.376 END TEST locking_overlapped_coremask 00:07:52.376 ************************************ 00:07:52.377 18:15:03 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:52.377 18:15:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:52.377 18:15:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.377 18:15:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.377 18:15:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.377 ************************************ 00:07:52.377 START TEST locking_overlapped_coremask_via_rpc 00:07:52.377 ************************************ 00:07:52.377 18:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:52.377 18:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62444 00:07:52.377 18:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62444 /var/tmp/spdk.sock 00:07:52.377 18:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62444 ']' 00:07:52.377 18:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.377 18:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.377 18:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.377 18:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:52.377 18:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.377 18:15:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.377 [2024-07-22 18:15:04.099664] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:52.377 [2024-07-22 18:15:04.099871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62444 ] 00:07:52.377 [2024-07-22 18:15:04.274129] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
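With --disable-cpumask-locks the target above starts without claiming any core locks ("CPU core locks deactivated"), in contrast to the check_remaining_locks pass at the end of the previous test, which verified that the 0x7-mask target owned exactly /var/tmp/spdk_cpu_lock_000 through _002. Either state can be inspected by hand; a minimal sketch, with the pid placeholder standing in for the spdk_tgt instance under test:

    # list whatever per-core lock files currently exist
    ls -l /var/tmp/spdk_cpu_lock_* 2>/dev/null
    # check whether a given target holds an flock on any of them
    lslocks -p <spdk_tgt_pid> | grep spdk_cpu_lock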
00:07:52.377 [2024-07-22 18:15:04.274199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.636 [2024-07-22 18:15:04.512277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.636 [2024-07-22 18:15:04.512375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.636 [2024-07-22 18:15:04.512395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.894 [2024-07-22 18:15:04.712088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:53.460 18:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.460 18:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:53.460 18:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:53.460 18:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62462 00:07:53.460 18:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62462 /var/tmp/spdk2.sock 00:07:53.460 18:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62462 ']' 00:07:53.460 18:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:53.460 18:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.460 18:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:53.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:53.460 18:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.460 18:15:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.460 [2024-07-22 18:15:05.450444] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:53.461 [2024-07-22 18:15:05.450642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62462 ] 00:07:53.719 [2024-07-22 18:15:05.635744] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
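Both targets in this test start cleanly only because --disable-cpumask-locks defers the lock claim: their masks overlap, since 0x7 covers cores 0-2 and 0x1c covers cores 2-4, sharing core 2. That shared core is exactly the one named in the claim failure that follows. The overlap is easy to confirm with shell arithmetic:

    # 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4)
    printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, i.e. bit 2 == core 2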
00:07:53.719 [2024-07-22 18:15:05.635839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.285 [2024-07-22 18:15:06.135842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.285 [2024-07-22 18:15:06.135924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.285 [2024-07-22 18:15:06.135937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:54.542 [2024-07-22 18:15:06.547044] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:56.442 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.442 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:56.442 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:56.442 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.442 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.442 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.442 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:56.442 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.443 [2024-07-22 18:15:08.101461] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62444 has claimed it. 00:07:56.443 request: 00:07:56.443 { 00:07:56.443 "method": "framework_enable_cpumask_locks", 00:07:56.443 "req_id": 1 00:07:56.443 } 00:07:56.443 Got JSON-RPC error response 00:07:56.443 response: 00:07:56.443 { 00:07:56.443 "code": -32603, 00:07:56.443 "message": "Failed to claim CPU core: 2" 00:07:56.443 } 00:07:56.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
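The request/response pair above is what the rpc_cmd wrapper sends and receives; the same calls can be issued directly with the rpc.py client from the repo checkout used in this run (paths and sockets as above). Whichever target claims first wins core 2, which is why the order below matters:

    # first target (default socket /var/tmp/spdk.sock, mask 0x7) claims cores 0-2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # second target (mask 0x1c) then fails with the -32603 'Failed to claim CPU core: 2' error
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks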
00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62444 /var/tmp/spdk.sock 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62444 ']' 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62462 /var/tmp/spdk2.sock 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62462 ']' 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:56.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
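waitforlisten above simply blocks until a target's RPC socket starts answering; the real helper lives in autotest_common.sh, but the core idea reduces to polling the socket with a cheap query such as rpc_get_methods. A simplified sketch, not the actual implementation:

    wait_for_rpc() {
      # poll the given RPC socket until it answers, up to ~10 seconds
      local sock=$1 retries=${2:-100}
      for ((i = 0; i < retries; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
      done
      return 1
    }
    wait_for_rpc /var/tmp/spdk2.sock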
00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.443 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.701 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.701 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:56.701 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:56.701 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:56.701 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:56.701 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:56.701 00:07:56.701 real 0m4.725s 00:07:56.701 user 0m1.646s 00:07:56.701 sys 0m0.212s 00:07:56.701 ************************************ 00:07:56.701 END TEST locking_overlapped_coremask_via_rpc 00:07:56.701 ************************************ 00:07:56.701 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.701 18:15:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.959 18:15:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:56.959 18:15:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:56.959 18:15:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62444 ]] 00:07:56.959 18:15:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62444 00:07:56.959 18:15:08 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62444 ']' 00:07:56.959 18:15:08 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62444 00:07:56.959 18:15:08 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:56.959 18:15:08 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.959 18:15:08 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62444 00:07:56.959 killing process with pid 62444 00:07:56.959 18:15:08 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.959 18:15:08 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.959 18:15:08 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62444' 00:07:56.959 18:15:08 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 62444 00:07:56.959 18:15:08 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 62444 00:07:59.489 18:15:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62462 ]] 00:07:59.489 18:15:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62462 00:07:59.489 18:15:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62462 ']' 00:07:59.489 18:15:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62462 00:07:59.489 18:15:11 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:59.489 18:15:11 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:59.489 18:15:11 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62462 00:07:59.489 18:15:11 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:59.489 18:15:11 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:59.489 killing process with pid 62462 00:07:59.489 18:15:11 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62462' 00:07:59.489 18:15:11 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 62462 00:07:59.489 18:15:11 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 62462 00:08:01.436 18:15:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:01.436 Process with pid 62444 is not found 00:08:01.436 Process with pid 62462 is not found 00:08:01.436 18:15:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:01.436 18:15:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62444 ]] 00:08:01.436 18:15:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62444 00:08:01.436 18:15:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62444 ']' 00:08:01.436 18:15:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62444 00:08:01.436 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (62444) - No such process 00:08:01.436 18:15:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 62444 is not found' 00:08:01.436 18:15:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62462 ]] 00:08:01.436 18:15:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62462 00:08:01.436 18:15:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 62462 ']' 00:08:01.436 18:15:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 62462 00:08:01.436 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (62462) - No such process 00:08:01.436 18:15:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 62462 is not found' 00:08:01.436 18:15:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:01.436 ************************************ 00:08:01.436 END TEST cpu_locks 00:08:01.436 ************************************ 00:08:01.436 00:08:01.436 real 0m50.266s 00:08:01.436 user 1m24.872s 00:08:01.436 sys 0m7.481s 00:08:01.436 18:15:13 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.436 18:15:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.436 18:15:13 event -- common/autotest_common.sh@1142 -- # return 0 00:08:01.436 ************************************ 00:08:01.436 END TEST event 00:08:01.436 ************************************ 00:08:01.436 00:08:01.436 real 1m22.438s 00:08:01.436 user 2m25.599s 00:08:01.436 sys 0m11.581s 00:08:01.436 18:15:13 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.436 18:15:13 event -- common/autotest_common.sh@10 -- # set +x 00:08:01.709 18:15:13 -- common/autotest_common.sh@1142 -- # return 0 00:08:01.709 18:15:13 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:01.709 18:15:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:01.709 18:15:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.709 18:15:13 -- common/autotest_common.sh@10 -- # set +x 00:08:01.709 ************************************ 00:08:01.709 START TEST thread 
00:08:01.709 ************************************ 00:08:01.709 18:15:13 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:01.709 * Looking for test storage... 00:08:01.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:01.709 18:15:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:01.709 18:15:13 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:01.709 18:15:13 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.709 18:15:13 thread -- common/autotest_common.sh@10 -- # set +x 00:08:01.709 ************************************ 00:08:01.709 START TEST thread_poller_perf 00:08:01.709 ************************************ 00:08:01.709 18:15:13 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:01.709 [2024-07-22 18:15:13.567800] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:01.709 [2024-07-22 18:15:13.568058] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62649 ] 00:08:01.967 [2024-07-22 18:15:13.738224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.967 [2024-07-22 18:15:13.975492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.967 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:03.865 ====================================== 00:08:03.865 busy:2209937220 (cyc) 00:08:03.865 total_run_count: 311000 00:08:03.865 tsc_hz: 2200000000 (cyc) 00:08:03.865 ====================================== 00:08:03.865 poller_cost: 7105 (cyc), 3229 (nsec) 00:08:03.865 00:08:03.865 real 0m1.873s 00:08:03.865 user 0m1.641s 00:08:03.865 sys 0m0.120s 00:08:03.865 18:15:15 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.865 ************************************ 00:08:03.865 END TEST thread_poller_perf 00:08:03.865 ************************************ 00:08:03.865 18:15:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:03.865 18:15:15 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:03.865 18:15:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:03.865 18:15:15 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:03.865 18:15:15 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.865 18:15:15 thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.865 ************************************ 00:08:03.865 START TEST thread_poller_perf 00:08:03.865 ************************************ 00:08:03.865 18:15:15 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:03.865 [2024-07-22 18:15:15.518586] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:03.866 [2024-07-22 18:15:15.518900] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62691 ] 00:08:03.866 [2024-07-22 18:15:15.721300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.130 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:04.130 [2024-07-22 18:15:15.959853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.520 ====================================== 00:08:05.520 busy:2204026287 (cyc) 00:08:05.520 total_run_count: 3923000 00:08:05.520 tsc_hz: 2200000000 (cyc) 00:08:05.520 ====================================== 00:08:05.520 poller_cost: 561 (cyc), 255 (nsec) 00:08:05.520 00:08:05.520 real 0m1.911s 00:08:05.520 user 0m1.644s 00:08:05.520 sys 0m0.155s 00:08:05.520 18:15:17 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.520 18:15:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 ************************************ 00:08:05.520 END TEST thread_poller_perf 00:08:05.520 ************************************ 00:08:05.520 18:15:17 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:05.520 18:15:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:05.520 ************************************ 00:08:05.520 END TEST thread 00:08:05.520 ************************************ 00:08:05.520 00:08:05.520 real 0m3.979s 00:08:05.520 user 0m3.360s 00:08:05.520 sys 0m0.386s 00:08:05.520 18:15:17 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.520 18:15:17 thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 18:15:17 -- common/autotest_common.sh@1142 -- # return 0 00:08:05.520 18:15:17 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:05.520 18:15:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:05.520 18:15:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.520 18:15:17 -- common/autotest_common.sh@10 -- # set +x 00:08:05.520 ************************************ 00:08:05.520 START TEST accel 00:08:05.520 ************************************ 00:08:05.520 18:15:17 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:05.520 * Looking for test storage... 00:08:05.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:05.779 18:15:17 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:05.779 18:15:17 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:05.779 18:15:17 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:05.779 18:15:17 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=62772 00:08:05.779 18:15:17 accel -- accel/accel.sh@63 -- # waitforlisten 62772 00:08:05.779 18:15:17 accel -- common/autotest_common.sh@829 -- # '[' -z 62772 ']' 00:08:05.779 18:15:17 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.779 18:15:17 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.779 18:15:17 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
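The poller_cost lines in the two summaries above follow directly from the other counters: cycles per call is busy cycles divided by total_run_count, and the nanosecond figure rescales that by the reported tsc_hz. Reproducing the first run (the same arithmetic gives 561 cyc / 255 nsec for the second):

    # 2209937220 busy cycles over 311000 poller executions at a 2.2 GHz TSC
    echo $(( 2209937220 / 311000 ))                              # -> 7105 cycles per call
    echo $(( 2209937220 / 311000 * 1000000000 / 2200000000 ))    # -> 3229 nsec per call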
00:08:05.779 18:15:17 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.779 18:15:17 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:05.779 18:15:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.779 18:15:17 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:05.779 18:15:17 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.779 18:15:17 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.779 18:15:17 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.779 18:15:17 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.779 18:15:17 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.779 18:15:17 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:05.779 18:15:17 accel -- accel/accel.sh@41 -- # jq -r . 00:08:05.779 [2024-07-22 18:15:17.655188] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:05.779 [2024-07-22 18:15:17.655357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62772 ] 00:08:06.037 [2024-07-22 18:15:17.818773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.037 [2024-07-22 18:15:18.052284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.295 [2024-07-22 18:15:18.264507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:06.862 18:15:18 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.862 18:15:18 accel -- common/autotest_common.sh@862 -- # return 0 00:08:06.862 18:15:18 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:06.862 18:15:18 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:06.862 18:15:18 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:06.862 18:15:18 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:06.862 18:15:18 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:06.862 18:15:18 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:06.862 18:15:18 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:06.862 18:15:18 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.862 18:15:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.862 18:15:18 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 
00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.120 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.120 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.120 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.121 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.121 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.121 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.121 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.121 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.121 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.121 18:15:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:07.121 18:15:18 accel -- accel/accel.sh@72 -- # IFS== 00:08:07.121 18:15:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:07.121 18:15:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:07.121 18:15:18 accel -- accel/accel.sh@75 -- # killprocess 62772 00:08:07.121 18:15:18 accel -- common/autotest_common.sh@948 -- # '[' -z 62772 ']' 00:08:07.121 18:15:18 accel -- common/autotest_common.sh@952 -- # kill -0 62772 00:08:07.121 18:15:18 accel -- common/autotest_common.sh@953 -- # uname 00:08:07.121 18:15:18 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.121 18:15:18 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62772 00:08:07.121 killing process with pid 62772 00:08:07.121 18:15:18 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:07.121 18:15:18 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:07.121 18:15:18 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62772' 00:08:07.121 18:15:18 accel -- common/autotest_common.sh@967 -- # kill 62772 00:08:07.121 18:15:18 accel -- common/autotest_common.sh@972 -- # wait 62772 00:08:09.662 18:15:21 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:09.662 18:15:21 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:09.662 18:15:21 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:09.662 18:15:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.662 18:15:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.662 18:15:21 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:08:09.662 18:15:21 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:09.662 18:15:21 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:09.662 18:15:21 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.662 18:15:21 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.662 18:15:21 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.662 18:15:21 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.662 18:15:21 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.662 18:15:21 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:09.662 18:15:21 
accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:08:09.662 18:15:21 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.662 18:15:21 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:09.662 18:15:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.662 18:15:21 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:09.662 18:15:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:09.662 18:15:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.662 18:15:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.662 ************************************ 00:08:09.662 START TEST accel_missing_filename 00:08:09.662 ************************************ 00:08:09.662 18:15:21 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:08:09.662 18:15:21 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:09.662 18:15:21 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:09.662 18:15:21 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:09.662 18:15:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.662 18:15:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:09.662 18:15:21 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.662 18:15:21 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:09.662 18:15:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:09.662 18:15:21 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:09.662 18:15:21 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.662 18:15:21 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.662 18:15:21 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.662 18:15:21 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.662 18:15:21 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.662 18:15:21 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:09.662 18:15:21 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:09.662 [2024-07-22 18:15:21.304702] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:09.662 [2024-07-22 18:15:21.304879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62847 ] 00:08:09.662 [2024-07-22 18:15:21.481677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.921 [2024-07-22 18:15:21.712144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.921 [2024-07-22 18:15:21.912197] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.487 [2024-07-22 18:15:22.395470] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:10.782 A filename is required. 
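The "A filename is required." error above is the expected outcome: a compress workload has no input unless -l points at a file, and accel_missing_filename deliberately omits it. Reproducing the negative case by hand would look roughly like this (same binary and flags as logged; the JSON config fed over /dev/fd/62 is left out of the sketch):

    if /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress; then
        echo "unexpected success: compress ran without -l <input file>" >&2
        exit 1
    fi
    echo "accel_perf failed as expected (missing input file)"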
00:08:11.040 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:11.040 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.040 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:11.040 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:11.040 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:11.040 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.040 00:08:11.040 real 0m1.559s 00:08:11.040 user 0m1.295s 00:08:11.041 sys 0m0.204s 00:08:11.041 ************************************ 00:08:11.041 END TEST accel_missing_filename 00:08:11.041 ************************************ 00:08:11.041 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.041 18:15:22 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:11.041 18:15:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.041 18:15:22 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:11.041 18:15:22 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:11.041 18:15:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.041 18:15:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.041 ************************************ 00:08:11.041 START TEST accel_compress_verify 00:08:11.041 ************************************ 00:08:11.041 18:15:22 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:11.041 18:15:22 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:11.041 18:15:22 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:11.041 18:15:22 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:11.041 18:15:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.041 18:15:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:11.041 18:15:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.041 18:15:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:11.041 18:15:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:11.041 18:15:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:11.041 18:15:22 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.041 18:15:22 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.041 18:15:22 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.041 18:15:22 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.041 18:15:22 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.041 18:15:22 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:08:11.041 18:15:22 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:11.041 [2024-07-22 18:15:22.917818] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:11.041 [2024-07-22 18:15:22.918020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62884 ] 00:08:11.300 [2024-07-22 18:15:23.094194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.559 [2024-07-22 18:15:23.316651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.559 [2024-07-22 18:15:23.520527] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.125 [2024-07-22 18:15:23.984422] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:12.384 00:08:12.384 Compression does not support the verify option, aborting. 00:08:12.384 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:12.384 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:12.384 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:12.384 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:12.384 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:12.384 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:12.384 00:08:12.384 real 0m1.520s 00:08:12.384 user 0m1.261s 00:08:12.384 sys 0m0.202s 00:08:12.384 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.384 ************************************ 00:08:12.384 END TEST accel_compress_verify 00:08:12.384 ************************************ 00:08:12.384 18:15:24 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:12.642 18:15:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.642 18:15:24 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:12.642 18:15:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:12.642 18:15:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.643 18:15:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.643 ************************************ 00:08:12.643 START TEST accel_wrong_workload 00:08:12.643 ************************************ 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
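Both negative tests above end with the same exit-status dance (es=234 -> 106 -> 1 for accel_missing_filename, es=161 -> 33 -> 1 for accel_compress_verify). That is the NOT wrapper in autotest_common.sh normalizing the status before inverting it; the helper itself is not shown in this log, but the traced arithmetic corresponds to roughly:

    es=$?                                   # raw status from the wrapped accel_perf call
    (( es > 128 )) && es=$(( es - 128 ))    # strip the 128+signal offset: 234 -> 106, 161 -> 33
    case "$es" in
        0) ;;                               # unexpected success would fail the NOT test
        *) es=1 ;;                          # any failure is folded to plain 1
    esac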
00:08:12.643 18:15:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:12.643 18:15:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:12.643 18:15:24 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.643 18:15:24 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.643 18:15:24 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.643 18:15:24 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.643 18:15:24 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.643 18:15:24 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:12.643 18:15:24 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:12.643 Unsupported workload type: foobar 00:08:12.643 [2024-07-22 18:15:24.474173] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:12.643 accel_perf options: 00:08:12.643 [-h help message] 00:08:12.643 [-q queue depth per core] 00:08:12.643 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:12.643 [-T number of threads per core 00:08:12.643 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:12.643 [-t time in seconds] 00:08:12.643 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:12.643 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:12.643 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:12.643 [-l for compress/decompress workloads, name of uncompressed input file 00:08:12.643 [-S for crc32c workload, use this seed value (default 0) 00:08:12.643 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:12.643 [-f for fill workload, use this BYTE value (default 255) 00:08:12.643 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:12.643 [-y verify result if this switch is on] 00:08:12.643 [-a tasks to allocate per core (default: same value as -q)] 00:08:12.643 Can be used to spread operations across a wider range of memory. 
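The usage text above is the reference for every accel_perf invocation in the rest of this run; the failure here is intentional, since "foobar" is not in the -w workload list. For contrast, a well-formed call built from the same documented options — matching the accel_crc32c case exercised a little further down — would be:

    # 1-second software crc32c run with a seed value of 32 and result verification (-y);
    # flags are taken from the usage text above and the later accel_crc32c test.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y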
00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:12.643 ************************************ 00:08:12.643 END TEST accel_wrong_workload 00:08:12.643 ************************************ 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:12.643 00:08:12.643 real 0m0.082s 00:08:12.643 user 0m0.090s 00:08:12.643 sys 0m0.040s 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.643 18:15:24 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:12.643 18:15:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.643 18:15:24 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:12.643 18:15:24 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:12.643 18:15:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.643 18:15:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.643 ************************************ 00:08:12.643 START TEST accel_negative_buffers 00:08:12.643 ************************************ 00:08:12.643 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:12.643 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:12.643 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:12.643 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:12.643 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:12.643 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:12.643 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:12.643 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:12.643 18:15:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:12.643 18:15:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:12.643 18:15:24 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.643 18:15:24 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.643 18:15:24 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.643 18:15:24 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.643 18:15:24 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.643 18:15:24 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:12.643 18:15:24 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:12.643 -x option must be non-negative. 
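accel_negative_buffers drives the parser down the other error path: the usage text says -x takes a source-buffer count with a minimum of 2, so -x -1 is rejected before any work is queued ("-x option must be non-negative."). The corresponding valid form, shown only for contrast and not run by this stage of the suite, would be:

    # xor across the minimum two source buffers, with verification
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2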
00:08:12.643 [2024-07-22 18:15:24.627421] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:12.643 accel_perf options: 00:08:12.643 [-h help message] 00:08:12.643 [-q queue depth per core] 00:08:12.643 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:12.643 [-T number of threads per core 00:08:12.643 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:12.643 [-t time in seconds] 00:08:12.643 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:12.643 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:12.643 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:12.643 [-l for compress/decompress workloads, name of uncompressed input file 00:08:12.643 [-S for crc32c workload, use this seed value (default 0) 00:08:12.643 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:12.643 [-f for fill workload, use this BYTE value (default 255) 00:08:12.643 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:12.643 [-y verify result if this switch is on] 00:08:12.643 [-a tasks to allocate per core (default: same value as -q)] 00:08:12.643 Can be used to spread operations across a wider range of memory. 00:08:12.902 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:12.902 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:12.902 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:12.902 ************************************ 00:08:12.902 END TEST accel_negative_buffers 00:08:12.902 ************************************ 00:08:12.902 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:12.902 00:08:12.902 real 0m0.107s 00:08:12.902 user 0m0.129s 00:08:12.902 sys 0m0.052s 00:08:12.902 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.902 18:15:24 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:12.902 18:15:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:12.902 18:15:24 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:12.902 18:15:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:12.902 18:15:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.902 18:15:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:12.902 ************************************ 00:08:12.902 START TEST accel_crc32c 00:08:12.902 ************************************ 00:08:12.902 18:15:24 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:12.902 18:15:24 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:12.902 18:15:24 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:12.902 18:15:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:12.902 18:15:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:12.902 18:15:24 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:12.902 18:15:24 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:08:12.903 18:15:24 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:12.903 18:15:24 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:12.903 18:15:24 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:12.903 18:15:24 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.903 18:15:24 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.903 18:15:24 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:12.903 18:15:24 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:12.903 18:15:24 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:12.903 [2024-07-22 18:15:24.768526] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:12.903 [2024-07-22 18:15:24.768722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62962 ] 00:08:13.162 [2024-07-22 18:15:24.938851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.421 [2024-07-22 18:15:25.209163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:13.421 18:15:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:15.322 18:15:27 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.322 00:08:15.322 real 0m2.593s 00:08:15.322 user 0m2.287s 00:08:15.322 sys 0m0.209s 00:08:15.322 ************************************ 00:08:15.322 END TEST accel_crc32c 00:08:15.322 ************************************ 00:08:15.322 18:15:27 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.322 18:15:27 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:15.581 18:15:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:15.581 18:15:27 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:15.581 18:15:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:15.581 18:15:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.581 18:15:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.581 ************************************ 00:08:15.581 START TEST accel_crc32c_C2 00:08:15.581 ************************************ 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:15.581 18:15:27 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:15.581 18:15:27 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:15.581 [2024-07-22 18:15:27.403433] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:15.581 [2024-07-22 18:15:27.403591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63003 ] 00:08:15.581 [2024-07-22 18:15:27.565527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.840 [2024-07-22 18:15:27.798641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:16.099 18:15:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.002 18:15:29 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:18.002 ************************************ 00:08:18.002 END TEST accel_crc32c_C2 00:08:18.002 ************************************ 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:18.002 00:08:18.002 real 0m2.552s 00:08:18.002 user 0m2.269s 00:08:18.002 sys 0m0.188s 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.002 18:15:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:18.002 18:15:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:18.002 18:15:29 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:18.002 18:15:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:18.002 18:15:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.002 18:15:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.002 ************************************ 00:08:18.002 START TEST accel_copy 00:08:18.002 ************************************ 00:08:18.002 18:15:29 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.002 18:15:29 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.002 18:15:29 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:18.003 18:15:29 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:18.003 [2024-07-22 18:15:30.015815] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:18.003 [2024-07-22 18:15:30.015980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63055 ] 00:08:18.261 [2024-07-22 18:15:30.191803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.520 [2024-07-22 18:15:30.416403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.778 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.778 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.778 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.778 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 
18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:18.779 18:15:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:20.680 18:15:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.680 00:08:20.680 real 0m2.557s 00:08:20.680 user 0m2.262s 00:08:20.680 sys 0m0.198s 00:08:20.680 18:15:32 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.680 ************************************ 00:08:20.680 END TEST accel_copy 00:08:20.680 ************************************ 00:08:20.680 18:15:32 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:20.680 18:15:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:20.680 18:15:32 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:20.680 18:15:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:20.680 18:15:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.680 18:15:32 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.680 ************************************ 00:08:20.680 START TEST accel_fill 00:08:20.680 ************************************ 00:08:20.680 18:15:32 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.680 18:15:32 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:20.680 18:15:32 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:20.680 [2024-07-22 18:15:32.622726] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:20.680 [2024-07-22 18:15:32.622870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63102 ] 00:08:20.938 [2024-07-22 18:15:32.797116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.196 [2024-07-22 18:15:33.036860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:21.453 18:15:33 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:21.453 18:15:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:23.351 18:15:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.351 00:08:23.351 real 0m2.592s 00:08:23.351 user 0m2.292s 00:08:23.351 sys 0m0.202s 00:08:23.351 ************************************ 00:08:23.351 END TEST accel_fill 00:08:23.351 ************************************ 00:08:23.351 18:15:35 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.351 18:15:35 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:23.351 18:15:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:23.351 18:15:35 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:23.351 18:15:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:23.351 18:15:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.351 18:15:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.351 ************************************ 00:08:23.351 START TEST accel_copy_crc32c 00:08:23.351 ************************************ 00:08:23.351 18:15:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:08:23.351 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:23.351 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:23.351 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.351 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.351 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:23.351 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:23.351 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:23.351 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.352 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.352 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.352 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.352 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.352 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:08:23.352 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:23.352 [2024-07-22 18:15:35.260300] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:23.352 [2024-07-22 18:15:35.260473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63148 ] 00:08:23.647 [2024-07-22 18:15:35.434028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.946 [2024-07-22 18:15:35.670074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:23.946 18:15:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.845 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.845 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.846 00:08:25.846 real 0m2.542s 00:08:25.846 user 0m2.251s 00:08:25.846 sys 0m0.196s 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.846 18:15:37 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 ************************************ 00:08:25.846 END TEST accel_copy_crc32c 00:08:25.846 ************************************ 00:08:25.846 18:15:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:25.846 18:15:37 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:25.846 18:15:37 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:25.846 18:15:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.846 18:15:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 ************************************ 00:08:25.846 START TEST accel_copy_crc32c_C2 00:08:25.846 ************************************ 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:25.846 18:15:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:25.846 [2024-07-22 18:15:37.854579] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:25.846 [2024-07-22 18:15:37.854746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63195 ] 00:08:26.104 [2024-07-22 18:15:38.030196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.362 [2024-07-22 18:15:38.361171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.620 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.620 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.620 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.620 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.620 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:26.621 18:15:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.522 00:08:28.522 real 0m2.707s 00:08:28.522 user 0m2.397s 00:08:28.522 sys 0m0.207s 00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:08:28.522 18:15:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:28.522 ************************************ 00:08:28.522 END TEST accel_copy_crc32c_C2 00:08:28.522 ************************************ 00:08:28.789 18:15:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:28.789 18:15:40 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:28.789 18:15:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:28.789 18:15:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.789 18:15:40 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.789 ************************************ 00:08:28.789 START TEST accel_dualcast 00:08:28.789 ************************************ 00:08:28.789 18:15:40 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:28.789 18:15:40 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:28.789 [2024-07-22 18:15:40.635952] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:28.789 [2024-07-22 18:15:40.636201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63241 ] 00:08:29.047 [2024-07-22 18:15:40.814235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.047 [2024-07-22 18:15:41.049378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.305 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.306 18:15:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:31.206 18:15:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.206 00:08:31.206 real 0m2.586s 00:08:31.206 user 0m2.280s 00:08:31.206 sys 0m0.209s 00:08:31.206 18:15:43 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.206 18:15:43 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:31.206 ************************************ 00:08:31.206 END TEST accel_dualcast 00:08:31.206 ************************************ 00:08:31.206 18:15:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:31.206 18:15:43 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:31.206 18:15:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:31.206 18:15:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.206 18:15:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.206 ************************************ 00:08:31.206 START TEST accel_compare 00:08:31.206 ************************************ 00:08:31.206 18:15:43 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:31.206 18:15:43 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:31.484 [2024-07-22 18:15:43.248964] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:31.484 [2024-07-22 18:15:43.249185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63288 ] 00:08:31.484 [2024-07-22 18:15:43.425452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.745 [2024-07-22 18:15:43.660928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:32.004 18:15:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:33.906 18:15:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.906 00:08:33.906 real 0m2.558s 00:08:33.906 user 0m2.251s 00:08:33.906 sys 0m0.206s 00:08:33.906 18:15:45 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.906 ************************************ 00:08:33.906 END TEST accel_compare 00:08:33.906 18:15:45 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:33.906 ************************************ 00:08:33.906 18:15:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:33.906 18:15:45 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:33.906 18:15:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:33.906 18:15:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.906 18:15:45 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.906 ************************************ 00:08:33.906 START TEST accel_xor 00:08:33.906 ************************************ 00:08:33.906 18:15:45 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:33.906 18:15:45 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:33.906 [2024-07-22 18:15:45.857247] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:33.906 [2024-07-22 18:15:45.857511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63340 ] 00:08:34.165 [2024-07-22 18:15:46.033196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.423 [2024-07-22 18:15:46.261752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:34.682 18:15:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.584 18:15:48 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.584 00:08:36.584 real 0m2.547s 00:08:36.584 user 0m2.252s 00:08:36.584 sys 0m0.199s 00:08:36.584 18:15:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.584 18:15:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:36.584 ************************************ 00:08:36.584 END TEST accel_xor 00:08:36.584 ************************************ 00:08:36.584 18:15:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:36.584 18:15:48 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:36.584 18:15:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:36.584 18:15:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.584 18:15:48 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.584 ************************************ 00:08:36.584 START TEST accel_xor 00:08:36.584 ************************************ 00:08:36.584 18:15:48 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:36.584 18:15:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:36.584 [2024-07-22 18:15:48.459823] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:36.584 [2024-07-22 18:15:48.460009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63381 ] 00:08:36.842 [2024-07-22 18:15:48.634831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.101 [2024-07-22 18:15:48.870542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:37.101 18:15:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:39.001 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:39.002 18:15:50 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:39.002 18:15:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.002 00:08:39.002 real 0m2.549s 00:08:39.002 user 0m2.256s 00:08:39.002 sys 0m0.194s 00:08:39.002 18:15:50 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.002 ************************************ 00:08:39.002 END TEST accel_xor 00:08:39.002 ************************************ 00:08:39.002 18:15:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:39.002 18:15:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:39.002 18:15:50 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:39.002 18:15:50 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:39.002 18:15:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.002 18:15:50 accel -- common/autotest_common.sh@10 -- # set +x 00:08:39.002 ************************************ 00:08:39.002 START TEST accel_dif_verify 00:08:39.002 ************************************ 00:08:39.002 18:15:51 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:39.002 18:15:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:39.260 [2024-07-22 18:15:51.056795] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
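The accel_xor pass that finishes above was launched higher up in this block as run_test accel_xor accel_test -t 1 -w xor -y -x 3, which boils down to the accel_perf command echoed in the trace. As a rough reproduction outside the harness (a minimal sketch, assuming the same vagrant checkout layout and omitting the -c /dev/fd/62 JSON accel config that the accel.sh wrapper appears to generate via build_accel_config):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3

The -t 1, -w xor, -y and -x 3 arguments are copied verbatim from the logged command line; the trace's val=3 and val='4096 bytes' entries suggest -x sets the xor source count and 4096 bytes is the block size in use, but that reading is an inference from the log, not something it states.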
00:08:39.260 [2024-07-22 18:15:51.057017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63433 ] 00:08:39.260 [2024-07-22 18:15:51.230182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.519 [2024-07-22 18:15:51.467992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:39.778 18:15:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.681 18:15:53 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:41.681 18:15:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:41.681 00:08:41.681 real 0m2.548s 00:08:41.681 user 0m2.245s 00:08:41.681 sys 0m0.209s 00:08:41.681 18:15:53 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.681 ************************************ 00:08:41.681 END TEST accel_dif_verify 00:08:41.681 ************************************ 00:08:41.681 18:15:53 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:41.681 18:15:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:41.681 18:15:53 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:41.681 18:15:53 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:41.681 18:15:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.681 18:15:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:41.681 ************************************ 00:08:41.681 START TEST accel_dif_generate 00:08:41.681 ************************************ 00:08:41.681 18:15:53 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:41.681 18:15:53 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:41.681 18:15:53 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:41.681 18:15:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:41.681 18:15:53 
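Most of the volume in this stretch of the log is bash xtrace of accel.sh's option-parsing loop rather than accel_perf output: the repeated IFS=: / read -r var val / case "$var" lines (accel/accel.sh lines 19-23 in this checkout) record the opcode the test expects (accel_opc=xor, accel_opc=dif_verify, ...) and the engine in use (accel_module=software), which the [[ -n software ]] / [[ -n dif_verify ]] checks at the end of each test then assert. A minimal sketch of that pattern, with hypothetical key names since the actual case branches never appear in the xtrace:

while IFS=: read -r var val; do
    case "$var" in
        opc) accel_opc=$val ;;        # hypothetical branch names; the xtrace only
        module) accel_module=$val ;;  # shows the resulting assignments
    esac
done <<'EOF'
opc:dif_generate
module:software
EOF
[[ -n $accel_opc && -n $accel_module ]] && echo "parsed: $accel_opc via $accel_module"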
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:41.681 18:15:53 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:41.681 18:15:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:41.681 18:15:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:41.681 18:15:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:41.681 18:15:53 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:41.681 18:15:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:41.682 18:15:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.682 18:15:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:41.682 18:15:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:41.682 18:15:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:41.682 [2024-07-22 18:15:53.642923] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:41.682 [2024-07-22 18:15:53.643069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63474 ] 00:08:41.941 [2024-07-22 18:15:53.804351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.199 [2024-07-22 18:15:54.056621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:42.458 18:15:54 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:42.458 18:15:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:44.357 18:15:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:44.357 00:08:44.357 real 0m2.606s 
00:08:44.357 user 0m2.316s 00:08:44.357 sys 0m0.190s 00:08:44.357 18:15:56 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.357 18:15:56 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:44.357 ************************************ 00:08:44.357 END TEST accel_dif_generate 00:08:44.357 ************************************ 00:08:44.357 18:15:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:44.357 18:15:56 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:44.357 18:15:56 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:44.357 18:15:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.357 18:15:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:44.357 ************************************ 00:08:44.357 START TEST accel_dif_generate_copy 00:08:44.357 ************************************ 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:44.357 18:15:56 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:44.357 [2024-07-22 18:15:56.313595] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:44.357 [2024-07-22 18:15:56.314589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63525 ] 00:08:44.615 [2024-07-22 18:15:56.487298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.874 [2024-07-22 18:15:56.785889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.132 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.133 18:15:57 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:45.133 18:15:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:47.037 00:08:47.037 real 0m2.710s 00:08:47.037 user 0m2.404s 00:08:47.037 sys 0m0.207s 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.037 ************************************ 00:08:47.037 END TEST accel_dif_generate_copy 00:08:47.037 ************************************ 00:08:47.037 18:15:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:47.037 18:15:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:47.037 18:15:59 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:47.037 18:15:59 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:47.037 18:15:59 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:47.037 18:15:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.037 18:15:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:47.037 ************************************ 00:08:47.037 START TEST accel_comp 00:08:47.037 ************************************ 00:08:47.037 18:15:59 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:47.037 18:15:59 
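The three DIF tests that complete above (accel_dif_verify, accel_dif_generate, accel_dif_generate_copy) drive the same software engine through accel_perf with only the -w workload changing, each finishing in roughly 2.5-2.7 s of wall time. Stripped of the wrapper's -c /dev/fd/62 config descriptor, the logged invocations reduce to the following sketch (same path assumptions as the xor note above):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy

The '4096 bytes', '512 bytes' and '8 bytes' values echoed in the DIF traces pass through the same parsing loop, but the log never names the options they belong to, so they are left uninterpreted here.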
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:47.037 18:15:59 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:47.295 [2024-07-22 18:15:59.074150] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:47.295 [2024-07-22 18:15:59.074344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63567 ] 00:08:47.295 [2024-07-22 18:15:59.251777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.552 [2024-07-22 18:15:59.492491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:47.813 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:47.814 18:15:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:49.714 18:16:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:49.715 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:49.715 18:16:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:49.715 18:16:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:49.715 18:16:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:49.715 ************************************ 00:08:49.715 END TEST accel_comp 00:08:49.715 ************************************ 00:08:49.715 18:16:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:49.715 00:08:49.715 real 0m2.593s 00:08:49.715 user 0m0.012s 00:08:49.715 sys 0m0.004s 00:08:49.715 18:16:01 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.715 18:16:01 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:49.715 18:16:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:49.715 18:16:01 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:49.715 18:16:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:49.715 18:16:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.715 18:16:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:49.715 ************************************ 00:08:49.715 START TEST accel_decomp 00:08:49.715 ************************************ 00:08:49.715 18:16:01 
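accel_comp, which ends above, is the first test in this block that reads an input file: run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib compresses the bundled test/accel/bib corpus for one second, and the accel_decomp test that starts next reuses the same file with -w decompress plus -y (by analogy with the xor runs this looks like a verify flag, though the log never says so). A direct reproduction sketch under the same assumptions as the earlier notes:

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y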
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:49.715 18:16:01 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:49.715 [2024-07-22 18:16:01.703942] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:49.715 [2024-07-22 18:16:01.704868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63619 ] 00:08:49.973 [2024-07-22 18:16:01.867236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.231 [2024-07-22 18:16:02.103556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
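[Editor's note] The accel.accel_decomp xtrace above is stepping through accel.sh's option loop: each "IFS=:" / "read -r var val" / case "$var" triplet is one colon-separated var:val pair being parsed, with accel_opc=decompress and accel_module=software set along the way (the '4096 bytes', 32, 32, '1 seconds' and Yes values travel through the same loop). Below is a minimal bash sketch of that read/case pattern, for orientation only; the function name read_accel_opts and the -w/-m case keys are illustrative assumptions and are not copied from accel.sh.

  # Sketch of the var:val parsing pattern visible in the xtrace above (not the real accel.sh).
  read_accel_opts() {
      local var val accel_opc accel_module
      while IFS=: read -r var val; do       # same IFS=: and read -r var val pair as in the trace
          case "$var" in                    # same case "$var" dispatch as in the trace
              -w) accel_opc=$val ;;         # assumed key; trace shows accel_opc=decompress
              -m) accel_module=$val ;;      # assumed key; trace shows accel_module=software
              *)  : ;;                      # block size, queue depths, run time, verify flag, ...
          esac
      done
      printf 'accel_opc=%s accel_module=%s\n' "$accel_opc" "$accel_module"
  }
  # Hypothetical usage:
  #   printf '%s\n' '-w:decompress' '-m:software' | read_accel_opts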
00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:50.490 18:16:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.392 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.392 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.392 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.392 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.392 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.392 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.392 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.392 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.392 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.392 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.392 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:52.393 18:16:04 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:52.393 00:08:52.393 real 0m2.550s 00:08:52.393 user 0m2.261s 00:08:52.393 sys 0m0.192s 00:08:52.393 18:16:04 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.393 ************************************ 00:08:52.393 18:16:04 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:52.393 END TEST accel_decomp 00:08:52.393 ************************************ 00:08:52.393 18:16:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:52.393 18:16:04 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:52.393 18:16:04 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:52.393 18:16:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.393 18:16:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:52.393 ************************************ 00:08:52.393 START TEST accel_decomp_full 00:08:52.393 ************************************ 00:08:52.393 18:16:04 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:52.393 18:16:04 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:52.393 [2024-07-22 18:16:04.310739] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:08:52.393 [2024-07-22 18:16:04.310934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63666 ] 00:08:52.651 [2024-07-22 18:16:04.489899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.910 [2024-07-22 18:16:04.735231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.168 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:53.169 18:16:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.071 18:16:06 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:55.071 18:16:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:55.071 00:08:55.071 real 0m2.577s 00:08:55.071 user 0m2.272s 00:08:55.071 sys 0m0.211s 00:08:55.071 18:16:06 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.071 ************************************ 00:08:55.071 END TEST accel_decomp_full 00:08:55.071 ************************************ 00:08:55.071 18:16:06 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:55.071 18:16:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:55.071 18:16:06 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:55.071 18:16:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:55.071 18:16:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.071 18:16:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:55.071 ************************************ 00:08:55.071 START TEST accel_decomp_mcore 00:08:55.071 ************************************ 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:55.071 18:16:06 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:55.071 [2024-07-22 18:16:06.945543] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:55.071 [2024-07-22 18:16:06.945713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63707 ] 00:08:55.330 [2024-07-22 18:16:07.118277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.588 [2024-07-22 18:16:07.356649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.588 [2024-07-22 18:16:07.356809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.588 [2024-07-22 18:16:07.357107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.588 [2024-07-22 18:16:07.356963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.588 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:55.589 18:16:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.119 18:16:09 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.119 ************************************ 00:08:58.119 END TEST accel_decomp_mcore 00:08:58.119 ************************************ 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:58.119 00:08:58.119 real 0m2.680s 00:08:58.119 user 0m0.017s 00:08:58.119 sys 0m0.003s 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.119 18:16:09 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:58.119 18:16:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:58.119 18:16:09 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.119 18:16:09 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:58.119 18:16:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.119 18:16:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:58.119 ************************************ 00:08:58.119 START TEST accel_decomp_full_mcore 00:08:58.119 ************************************ 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:58.119 18:16:09 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:58.119 18:16:09 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:58.119 [2024-07-22 18:16:09.672366] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:58.119 [2024-07-22 18:16:09.672555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63762 ] 00:08:58.119 [2024-07-22 18:16:09.847564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.119 [2024-07-22 18:16:10.087006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.119 [2024-07-22 18:16:10.087128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.119 [2024-07-22 18:16:10.087276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.119 [2024-07-22 18:16:10.087416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.377 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:58.378 18:16:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.378 18:16:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.279 18:16:12 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:00.279 00:09:00.279 real 0m2.617s 00:09:00.279 user 0m0.015s 00:09:00.279 sys 0m0.005s 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.279 18:16:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:00.279 ************************************ 00:09:00.279 END TEST accel_decomp_full_mcore 00:09:00.279 ************************************ 00:09:00.279 18:16:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:00.279 18:16:12 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:00.279 18:16:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:00.279 18:16:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.279 18:16:12 accel -- common/autotest_common.sh@10 -- # set +x 00:09:00.279 ************************************ 00:09:00.279 START TEST accel_decomp_mthread 00:09:00.279 ************************************ 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:00.279 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:00.538 [2024-07-22 18:16:12.341682] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:00.538 [2024-07-22 18:16:12.341875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63810 ] 00:09:00.538 [2024-07-22 18:16:12.517038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.796 [2024-07-22 18:16:12.749242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.054 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
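[Editor's note] The accel_decomp_mthread run being traced here reduces to the accel_perf command shown at accel/accel.sh@12 just above: build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l .../test/accel/bib -y -T 2, with the accel JSON config (empty in this run, per accel_json_cfg=()) supplied on fd 62. A rough standalone reproduction follows, under stated assumptions: the paths are copied from this log, and dropping the -c /dev/fd/62 config is an assumption that leaves accel_perf on its defaults, which in this run resolved to the software module anyway.

  # Sketch: rerun the traced decompress workload by hand (paths taken from the log above).
  SPDK=/home/vagrant/spdk_repo/spdk
  args=(
      -t 1                       # run time; corresponds to '1 seconds' in the trace
      -w decompress              # workload; matches accel_opc=decompress
      -l "$SPDK/test/accel/bib"  # input file used by the decompress tests
      -y                         # passed unchanged from the traced command line
      -T 2                       # two worker threads; matches val=2 and run_test ... -T 2
  )
  "$SPDK/build/examples/accel_perf" "${args[@]}"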
00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.055 18:16:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.955 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.955 18:16:14 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:09:02.955 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.955 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.955 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:02.956 00:09:02.956 real 0m2.626s 00:09:02.956 user 0m2.315s 00:09:02.956 sys 0m0.214s 00:09:02.956 ************************************ 00:09:02.956 END TEST accel_decomp_mthread 00:09:02.956 ************************************ 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.956 18:16:14 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:02.956 18:16:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:02.956 18:16:14 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:02.956 18:16:14 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:02.956 18:16:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.956 18:16:14 accel -- common/autotest_common.sh@10 -- # set +x 00:09:02.956 ************************************ 00:09:02.956 START 
TEST accel_decomp_full_mthread 00:09:02.956 ************************************ 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:02.956 18:16:14 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:03.214 [2024-07-22 18:16:15.017705] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:03.214 [2024-07-22 18:16:15.017857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63858 ] 00:09:03.214 [2024-07-22 18:16:15.186737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.472 [2024-07-22 18:16:15.438194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:03.731 18:16:15 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.731 18:16:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:05.634 ************************************ 00:09:05.634 END TEST accel_decomp_full_mthread 00:09:05.634 ************************************ 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:05.634 00:09:05.634 real 0m2.580s 00:09:05.634 user 0m2.279s 00:09:05.634 sys 0m0.206s 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.634 18:16:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
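For reference, the full-buffer decompress case above can be re-run by hand with the accel_perf invocation echoed in the xtrace. A minimal sketch follows; the harness additionally passes -c /dev/fd/62 with a generated accel JSON config, and dropping that option when only the software module is exercised is an assumption on my part.

    SPDK=/home/vagrant/spdk_repo/spdk
    # -t 1: run for 1 second ('1 seconds' in the xtrace), -w decompress: workload,
    # -l: compressed input file, -y: verify the result, -T 2: two worker threads.
    # With -o 0 the xtrace shows the data size jump from '4096 bytes' to
    # '111250 bytes', i.e. the whole input file is used; that reading is inferred.
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -T 2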
00:09:05.634 18:16:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:05.634 18:16:17 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:05.634 18:16:17 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:05.634 18:16:17 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:05.634 18:16:17 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:05.634 18:16:17 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:05.634 18:16:17 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:05.634 18:16:17 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:05.634 18:16:17 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:05.634 18:16:17 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:05.634 18:16:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.634 18:16:17 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:05.634 18:16:17 accel -- accel/accel.sh@41 -- # jq -r . 00:09:05.634 18:16:17 accel -- common/autotest_common.sh@10 -- # set +x 00:09:05.634 ************************************ 00:09:05.634 START TEST accel_dif_functional_tests 00:09:05.634 ************************************ 00:09:05.634 18:16:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:05.894 [2024-07-22 18:16:17.712000] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:05.894 [2024-07-22 18:16:17.712333] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63910 ] 00:09:06.152 [2024-07-22 18:16:17.918921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.411 [2024-07-22 18:16:18.214829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.411 [2024-07-22 18:16:18.214950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.411 [2024-07-22 18:16:18.214951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.411 [2024-07-22 18:16:18.413902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:06.669 00:09:06.669 00:09:06.670 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.670 http://cunit.sourceforge.net/ 00:09:06.670 00:09:06.670 00:09:06.670 Suite: accel_dif 00:09:06.670 Test: verify: DIF generated, GUARD check ...passed 00:09:06.670 Test: verify: DIF generated, APPTAG check ...passed 00:09:06.670 Test: verify: DIF generated, REFTAG check ...passed 00:09:06.670 Test: verify: DIF not generated, GUARD check ...passed 00:09:06.670 Test: verify: DIF not generated, APPTAG check ...passed 00:09:06.670 Test: verify: DIF not generated, REFTAG check ...passed 00:09:06.670 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:06.670 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 18:16:18.527354] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:06.670 [2024-07-22 18:16:18.527477] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:06.670 [2024-07-22 18:16:18.527537] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:06.670 [2024-07-22 18:16:18.527643] dif.c: 876:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:06.670 passed 00:09:06.670 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:06.670 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:06.670 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:06.670 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:09:06.670 Test: verify copy: DIF generated, GUARD check ...[2024-07-22 18:16:18.527868] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:06.670 passed 00:09:06.670 Test: verify copy: DIF generated, APPTAG check ...passed 00:09:06.670 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:06.670 Test: verify copy: DIF not generated, GUARD check ...passed 00:09:06.670 Test: verify copy: DIF not generated, APPTAG check ...passed 00:09:06.670 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 18:16:18.528144] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:06.670 [2024-07-22 18:16:18.528218] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:06.670 [2024-07-22 18:16:18.528281] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:06.670 passed 00:09:06.670 Test: generate copy: DIF generated, GUARD check ...passed 00:09:06.670 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:06.670 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:06.670 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:06.670 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:06.670 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:06.670 Test: generate copy: iovecs-len validate ...passed 00:09:06.670 Test: generate copy: buffer alignment validate ...passed 00:09:06.670 00:09:06.670 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.670 suites 1 1 n/a 0 0 00:09:06.670 tests 26 26 26 0 0 00:09:06.670 asserts 115 115 115 0 n/a 00:09:06.670 00:09:06.670 Elapsed time = 0.005 seconds 00:09:06.670 [2024-07-22 18:16:18.528673] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
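The 26 CUnit cases above (verify, verify copy, generate copy) come from a dedicated DIF test app rather than accel_perf; per the xtrace the harness launches it roughly as below, where /dev/fd/62 carries the accel JSON config assembled by build_accel_config (that pairing is my reading of the log).

    # Runs the accel_dif CUnit suite whose results are printed above.
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62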
00:09:08.099 00:09:08.099 real 0m2.055s 00:09:08.099 user 0m3.703s 00:09:08.099 sys 0m0.320s 00:09:08.099 ************************************ 00:09:08.099 END TEST accel_dif_functional_tests 00:09:08.099 ************************************ 00:09:08.099 18:16:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.099 18:16:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:08.099 18:16:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:08.099 00:09:08.099 real 1m2.236s 00:09:08.099 user 1m6.849s 00:09:08.099 sys 0m6.216s 00:09:08.099 18:16:19 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.099 18:16:19 accel -- common/autotest_common.sh@10 -- # set +x 00:09:08.099 ************************************ 00:09:08.099 END TEST accel 00:09:08.099 ************************************ 00:09:08.099 18:16:19 -- common/autotest_common.sh@1142 -- # return 0 00:09:08.099 18:16:19 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:08.099 18:16:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:08.099 18:16:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.099 18:16:19 -- common/autotest_common.sh@10 -- # set +x 00:09:08.099 ************************************ 00:09:08.099 START TEST accel_rpc 00:09:08.099 ************************************ 00:09:08.099 18:16:19 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:08.099 * Looking for test storage... 00:09:08.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:08.099 18:16:19 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:08.099 18:16:19 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63993 00:09:08.099 18:16:19 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:08.099 18:16:19 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 63993 00:09:08.099 18:16:19 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 63993 ']' 00:09:08.099 18:16:19 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.099 18:16:19 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.099 18:16:19 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.099 18:16:19 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.099 18:16:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.099 [2024-07-22 18:16:19.952338] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:08.099 [2024-07-22 18:16:19.952563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63993 ] 00:09:08.358 [2024-07-22 18:16:20.123015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.358 [2024-07-22 18:16:20.361679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.925 18:16:20 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:08.925 18:16:20 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:08.925 18:16:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:08.925 18:16:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:08.925 18:16:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:08.925 18:16:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:08.925 18:16:20 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:08.925 18:16:20 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:08.925 18:16:20 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.925 18:16:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.925 ************************************ 00:09:08.925 START TEST accel_assign_opcode 00:09:08.925 ************************************ 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:08.925 [2024-07-22 18:16:20.846670] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:08.925 [2024-07-22 18:16:20.854626] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.925 18:16:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:09.183 [2024-07-22 18:16:21.054892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:09.750 18:16:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.750 18:16:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:09.750 18:16:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.750 
18:16:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:09.750 18:16:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:09.750 18:16:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:09.750 18:16:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.750 software 00:09:09.750 00:09:09.750 real 0m0.805s 00:09:09.750 user 0m0.055s 00:09:09.750 sys 0m0.010s 00:09:09.750 18:16:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.750 18:16:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:09.750 ************************************ 00:09:09.750 END TEST accel_assign_opcode 00:09:09.750 ************************************ 00:09:09.750 18:16:21 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:09.750 18:16:21 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 63993 00:09:09.750 18:16:21 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 63993 ']' 00:09:09.750 18:16:21 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 63993 00:09:09.750 18:16:21 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:09:09.750 18:16:21 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:09.750 18:16:21 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63993 00:09:09.750 18:16:21 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:09.750 18:16:21 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:09.750 18:16:21 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63993' 00:09:09.750 killing process with pid 63993 00:09:09.750 18:16:21 accel_rpc -- common/autotest_common.sh@967 -- # kill 63993 00:09:09.750 18:16:21 accel_rpc -- common/autotest_common.sh@972 -- # wait 63993 00:09:12.296 00:09:12.296 real 0m4.183s 00:09:12.296 user 0m4.074s 00:09:12.296 sys 0m0.598s 00:09:12.296 18:16:23 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.296 18:16:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.296 ************************************ 00:09:12.296 END TEST accel_rpc 00:09:12.296 ************************************ 00:09:12.296 18:16:23 -- common/autotest_common.sh@1142 -- # return 0 00:09:12.296 18:16:23 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:12.296 18:16:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:12.296 18:16:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.296 18:16:23 -- common/autotest_common.sh@10 -- # set +x 00:09:12.296 ************************************ 00:09:12.296 START TEST app_cmdline 00:09:12.296 ************************************ 00:09:12.296 18:16:23 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:12.296 * Looking for test storage... 
00:09:12.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:12.296 18:16:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:12.296 18:16:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64109 00:09:12.296 18:16:24 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:12.296 18:16:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64109 00:09:12.296 18:16:24 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64109 ']' 00:09:12.296 18:16:24 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.296 18:16:24 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.296 18:16:24 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.296 18:16:24 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.296 18:16:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:12.296 [2024-07-22 18:16:24.187430] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:09:12.296 [2024-07-22 18:16:24.187607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64109 ] 00:09:12.554 [2024-07-22 18:16:24.361685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.813 [2024-07-22 18:16:24.611190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.813 [2024-07-22 18:16:24.818293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:09:13.769 18:16:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:13.769 { 00:09:13.769 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:09:13.769 "fields": { 00:09:13.769 "major": 24, 00:09:13.769 "minor": 9, 00:09:13.769 "patch": 0, 00:09:13.769 "suffix": "-pre", 00:09:13.769 "commit": "f7b31b2b9" 00:09:13.769 } 00:09:13.769 } 00:09:13.769 18:16:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:13.769 18:16:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:13.769 18:16:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:13.769 18:16:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:13.769 18:16:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:13.769 18:16:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.769 18:16:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.769 18:16:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:13.769 18:16:25 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:13.769 18:16:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:13.769 18:16:25 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:14.028 request: 00:09:14.028 { 00:09:14.028 "method": "env_dpdk_get_mem_stats", 00:09:14.028 "req_id": 1 00:09:14.028 } 00:09:14.028 Got JSON-RPC error response 00:09:14.028 response: 00:09:14.028 { 00:09:14.028 "code": -32601, 00:09:14.028 "message": "Method not found" 00:09:14.028 } 00:09:14.028 18:16:26 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:14.028 18:16:26 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.028 18:16:26 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:14.028 18:16:26 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.028 18:16:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64109 00:09:14.028 18:16:26 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64109 ']' 00:09:14.028 18:16:26 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64109 00:09:14.028 18:16:26 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:09:14.028 18:16:26 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:14.028 18:16:26 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64109 00:09:14.287 18:16:26 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:14.287 killing process with pid 64109 00:09:14.287 18:16:26 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:14.287 18:16:26 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64109' 00:09:14.287 18:16:26 app_cmdline -- common/autotest_common.sh@967 -- # kill 64109 00:09:14.287 18:16:26 app_cmdline -- common/autotest_common.sh@972 -- # wait 64109 00:09:16.251 00:09:16.251 real 0m4.272s 00:09:16.251 user 0m4.664s 00:09:16.251 sys 0m0.654s 00:09:16.251 18:16:28 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.251 18:16:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:16.251 ************************************ 00:09:16.251 END TEST app_cmdline 00:09:16.251 
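The cmdline test drives a spdk_tgt started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods answer and everything else is rejected with JSON-RPC error -32601. The two calls made above, reduced to their essentials:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Allowed method: prints the version object shown in the log.
    "$SPDK/scripts/rpc.py" spdk_get_version
    # Not on the allow list: expected to fail with "Method not found" (-32601).
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats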
************************************ 00:09:16.509 18:16:28 -- common/autotest_common.sh@1142 -- # return 0 00:09:16.509 18:16:28 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:16.509 18:16:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:16.509 18:16:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.509 18:16:28 -- common/autotest_common.sh@10 -- # set +x 00:09:16.509 ************************************ 00:09:16.509 START TEST version 00:09:16.509 ************************************ 00:09:16.509 18:16:28 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:16.509 * Looking for test storage... 00:09:16.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:16.509 18:16:28 version -- app/version.sh@17 -- # get_header_version major 00:09:16.509 18:16:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:16.509 18:16:28 version -- app/version.sh@14 -- # cut -f2 00:09:16.509 18:16:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:16.509 18:16:28 version -- app/version.sh@17 -- # major=24 00:09:16.509 18:16:28 version -- app/version.sh@18 -- # get_header_version minor 00:09:16.509 18:16:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:16.509 18:16:28 version -- app/version.sh@14 -- # cut -f2 00:09:16.509 18:16:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:16.509 18:16:28 version -- app/version.sh@18 -- # minor=9 00:09:16.509 18:16:28 version -- app/version.sh@19 -- # get_header_version patch 00:09:16.509 18:16:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:16.509 18:16:28 version -- app/version.sh@14 -- # cut -f2 00:09:16.509 18:16:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:16.509 18:16:28 version -- app/version.sh@19 -- # patch=0 00:09:16.509 18:16:28 version -- app/version.sh@20 -- # get_header_version suffix 00:09:16.509 18:16:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:16.509 18:16:28 version -- app/version.sh@14 -- # cut -f2 00:09:16.509 18:16:28 version -- app/version.sh@14 -- # tr -d '"' 00:09:16.509 18:16:28 version -- app/version.sh@20 -- # suffix=-pre 00:09:16.509 18:16:28 version -- app/version.sh@22 -- # version=24.9 00:09:16.509 18:16:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:16.509 18:16:28 version -- app/version.sh@28 -- # version=24.9rc0 00:09:16.509 18:16:28 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:16.509 18:16:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:16.509 18:16:28 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:16.509 18:16:28 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:16.509 00:09:16.509 real 0m0.155s 00:09:16.509 user 0m0.093s 00:09:16.509 sys 0m0.094s 00:09:16.509 18:16:28 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.509 18:16:28 version -- common/autotest_common.sh@10 -- # set +x 
00:09:16.509 ************************************ 00:09:16.509 END TEST version 00:09:16.509 ************************************ 00:09:16.510 18:16:28 -- common/autotest_common.sh@1142 -- # return 0 00:09:16.510 18:16:28 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:09:16.510 18:16:28 -- spdk/autotest.sh@198 -- # uname -s 00:09:16.510 18:16:28 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:09:16.510 18:16:28 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:16.510 18:16:28 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:09:16.510 18:16:28 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:09:16.510 18:16:28 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:09:16.510 18:16:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:16.510 18:16:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.510 18:16:28 -- common/autotest_common.sh@10 -- # set +x 00:09:16.510 ************************************ 00:09:16.510 START TEST spdk_dd 00:09:16.510 ************************************ 00:09:16.510 18:16:28 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:09:16.768 * Looking for test storage... 00:09:16.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:16.768 18:16:28 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.768 18:16:28 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.768 18:16:28 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.768 18:16:28 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.768 18:16:28 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.768 18:16:28 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.768 18:16:28 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.768 18:16:28 spdk_dd -- paths/export.sh@5 -- # export PATH 00:09:16.768 18:16:28 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.768 18:16:28 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:17.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:17.027 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:17.027 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:17.027 18:16:28 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:09:17.027 18:16:28 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@230 -- # local class 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@232 -- # local progif 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@233 -- # class=01 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@15 -- # local i 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@24 -- # return 0 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@15 -- # local i 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:09:17.027 18:16:28 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@24 -- # return 0 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:09:17.027 18:16:28 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:17.027 18:16:29 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:09:17.027 18:16:29 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:09:17.027 18:16:29 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:09:17.027 18:16:29 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:09:17.027 18:16:29 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:17.027 18:16:29 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:09:17.027 18:16:29 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:09:17.027 18:16:29 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:09:17.027 18:16:29 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:09:17.027 18:16:29 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:17.027 18:16:29 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@139 -- # local lib 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.027 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:09:17.028 
18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.028 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.287 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == 
liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:09:17.288 * spdk_dd linked to liburing 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 
00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:17.288 18:16:29 spdk_dd -- 
common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:17.288 18:16:29 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:09:17.288 18:16:29 spdk_dd -- dd/common.sh@153 -- # return 0 00:09:17.288 18:16:29 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:09:17.288 18:16:29 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:09:17.288 18:16:29 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:17.288 18:16:29 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.288 18:16:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:17.288 ************************************ 00:09:17.288 START TEST spdk_dd_basic_rw 00:09:17.288 ************************************ 00:09:17.288 18:16:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:09:17.288 * Looking for test 
storage... 00:09:17.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:17.288 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:17.288 18:16:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.288 18:16:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.288 18:16:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.288 18:16:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- 
dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:09:17.289 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:09:17.573 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not 
Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 
Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:09:17.573 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor 
ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare 
Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing 
Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:17.574 ************************************ 00:09:17.574 START TEST dd_bs_lt_native_bs 00:09:17.574 ************************************ 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:17.574 18:16:29 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:17.574 { 00:09:17.574 "subsystems": [ 00:09:17.574 { 00:09:17.574 "subsystem": "bdev", 00:09:17.574 "config": [ 00:09:17.574 { 00:09:17.574 "params": { 00:09:17.574 "trtype": "pcie", 00:09:17.574 "traddr": "0000:00:10.0", 00:09:17.574 "name": "Nvme0" 00:09:17.574 }, 00:09:17.574 "method": "bdev_nvme_attach_controller" 00:09:17.574 }, 00:09:17.574 { 00:09:17.574 "method": "bdev_wait_for_examine" 00:09:17.574 } 00:09:17.574 ] 00:09:17.574 } 00:09:17.574 ] 00:09:17.574 } 00:09:17.574 [2024-07-22 18:16:29.569067] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
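Note on the two very large bracketed matches earlier in this block: dd/common.sh derives the namespace's native block size by running spdk_nvme_identify against the controller and regex-matching its output twice, first for the index of the current LBA format and then for that format's data size. A condensed, illustrative bash sketch of that logic (the function name and locals here are ours, not the literal get_native_nvme_bs helper):

get_native_nvme_bs_sketch() {
    # Illustrative rework of the parsing visible in the trace above; not the literal dd/common.sh code.
    local pci=$1 id lbaf bs re
    # Capture the full identify output for the controller at the given PCIe address.
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    # First match: the index of the current LBA format.
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}        # "04" in this run
    # Second match: that format's data size, i.e. the native block size.
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] && bs=${BASH_REMATCH[1]}          # 4096 in this run
    echo "$bs"
}
# e.g.  native_bs=$(get_native_nvme_bs_sketch 0000:00:10.0)   # -> 4096

The 4096 returned here is the native block size that the dd_bs_lt_native_bs test launched above deliberately undercuts with --bs=2048, expecting spdk_dd to refuse the transfer; the *ERROR* line a little further down is that refusal.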
00:09:17.574 [2024-07-22 18:16:29.569304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64455 ] 00:09:17.832 [2024-07-22 18:16:29.745170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.089 [2024-07-22 18:16:30.017388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.347 [2024-07-22 18:16:30.224113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:18.605 [2024-07-22 18:16:30.409993] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:09:18.605 [2024-07-22 18:16:30.410067] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:19.171 [2024-07-22 18:16:30.955387] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:19.428 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:09:19.428 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:19.428 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:09:19.428 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:09:19.428 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:09:19.428 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:19.428 00:09:19.428 real 0m1.958s 00:09:19.428 user 0m1.616s 00:09:19.428 sys 0m0.284s 00:09:19.428 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.428 ************************************ 00:09:19.428 END TEST dd_bs_lt_native_bs 00:09:19.428 ************************************ 00:09:19.428 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:19.687 ************************************ 00:09:19.687 START TEST dd_rw 00:09:19.687 ************************************ 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:19.687 18:16:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:20.254 18:16:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:09:20.254 18:16:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:20.254 18:16:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:20.254 18:16:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:20.254 { 00:09:20.254 "subsystems": [ 00:09:20.254 { 00:09:20.254 "subsystem": "bdev", 00:09:20.254 "config": [ 00:09:20.254 { 00:09:20.254 "params": { 00:09:20.254 "trtype": "pcie", 00:09:20.254 "traddr": "0000:00:10.0", 00:09:20.254 "name": "Nvme0" 00:09:20.254 }, 00:09:20.254 "method": "bdev_nvme_attach_controller" 00:09:20.254 }, 00:09:20.254 { 00:09:20.254 "method": "bdev_wait_for_examine" 00:09:20.254 } 00:09:20.254 ] 00:09:20.254 } 00:09:20.254 ] 00:09:20.254 } 00:09:20.254 [2024-07-22 18:16:32.151843] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
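The dd_rw setup traced just above builds its block-size list by left-shifting the detected native block size and pairs every size with queue depths 1 and 64; the counts and sizes seen in this log (15 x 4096 = 61440 bytes now, 7 x 8192 = 57344 bytes later) follow from that. A rough sketch of the matrix, with run_one_pass standing in for the write/read-back/verify cycle sketched a bit further down (the count formula is an assumption that merely reproduces the values in this log):

native_bs=4096                        # detected above
qds=(1 64)
bss=()
for i in {0..2}; do
    bss+=($((native_bs << i)))        # 4096, 8192, 16384
done
for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        count=$((61440 / bs))         # gives the 15 and 7 seen in this log (assumed rule)
        run_one_pass "$bs" "$qd" "$count"
    done
done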
00:09:20.254 [2024-07-22 18:16:32.152018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64504 ] 00:09:20.513 [2024-07-22 18:16:32.330046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.772 [2024-07-22 18:16:32.577179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.772 [2024-07-22 18:16:32.786345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:22.407  Copying: 60/60 [kB] (average 19 MBps) 00:09:22.407 00:09:22.407 18:16:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:09:22.407 18:16:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:22.407 18:16:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:22.407 18:16:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:22.407 { 00:09:22.407 "subsystems": [ 00:09:22.407 { 00:09:22.407 "subsystem": "bdev", 00:09:22.407 "config": [ 00:09:22.407 { 00:09:22.407 "params": { 00:09:22.407 "trtype": "pcie", 00:09:22.407 "traddr": "0000:00:10.0", 00:09:22.407 "name": "Nvme0" 00:09:22.407 }, 00:09:22.407 "method": "bdev_nvme_attach_controller" 00:09:22.407 }, 00:09:22.407 { 00:09:22.407 "method": "bdev_wait_for_examine" 00:09:22.407 } 00:09:22.407 ] 00:09:22.407 } 00:09:22.407 ] 00:09:22.407 } 00:09:22.407 [2024-07-22 18:16:34.287188] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
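The JSON blocks dumped before each run are the bdev configuration that gen_conf emits and that spdk_dd consumes through --json; the /dev/fd/62 path suggests it is handed over via process substitution rather than a file on disk, which is an assumption. A sketch, with the config copied from this log and the helper body ours:

gen_conf() {    # illustrative stand-in for the gen_conf helper seen in the trace
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}
# e.g.  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)

Writing the same JSON to a temporary file and passing its path would work as well; the fd-based form simply avoids touching disk between runs.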
00:09:22.407 [2024-07-22 18:16:34.287382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64535 ] 00:09:22.710 [2024-07-22 18:16:34.458718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.710 [2024-07-22 18:16:34.704785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.968 [2024-07-22 18:16:34.911463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:24.160  Copying: 60/60 [kB] (average 29 MBps) 00:09:24.160 00:09:24.160 18:16:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:24.160 18:16:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:09:24.160 18:16:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:24.160 18:16:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:24.160 18:16:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:09:24.160 18:16:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:24.160 18:16:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:24.160 18:16:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:24.160 18:16:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:24.160 18:16:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:24.160 18:16:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:24.160 { 00:09:24.160 "subsystems": [ 00:09:24.160 { 00:09:24.160 "subsystem": "bdev", 00:09:24.160 "config": [ 00:09:24.160 { 00:09:24.160 "params": { 00:09:24.160 "trtype": "pcie", 00:09:24.160 "traddr": "0000:00:10.0", 00:09:24.160 "name": "Nvme0" 00:09:24.160 }, 00:09:24.160 "method": "bdev_nvme_attach_controller" 00:09:24.160 }, 00:09:24.160 { 00:09:24.160 "method": "bdev_wait_for_examine" 00:09:24.160 } 00:09:24.160 ] 00:09:24.160 } 00:09:24.160 ] 00:09:24.160 } 00:09:24.419 [2024-07-22 18:16:36.209155] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:24.419 [2024-07-22 18:16:36.209358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64568 ] 00:09:24.419 [2024-07-22 18:16:36.382829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.677 [2024-07-22 18:16:36.630146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.935 [2024-07-22 18:16:36.838816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:26.197  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:26.197 00:09:26.455 18:16:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:26.455 18:16:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:09:26.455 18:16:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:09:26.455 18:16:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:09:26.456 18:16:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:09:26.456 18:16:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:26.456 18:16:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:27.023 18:16:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:09:27.023 18:16:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:27.023 18:16:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:27.023 18:16:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:27.023 { 00:09:27.023 "subsystems": [ 00:09:27.023 { 00:09:27.023 "subsystem": "bdev", 00:09:27.023 "config": [ 00:09:27.023 { 00:09:27.023 "params": { 00:09:27.023 "trtype": "pcie", 00:09:27.023 "traddr": "0000:00:10.0", 00:09:27.023 "name": "Nvme0" 00:09:27.023 }, 00:09:27.023 "method": "bdev_nvme_attach_controller" 00:09:27.023 }, 00:09:27.023 { 00:09:27.023 "method": "bdev_wait_for_examine" 00:09:27.023 } 00:09:27.023 ] 00:09:27.023 } 00:09:27.023 ] 00:09:27.023 } 00:09:27.023 [2024-07-22 18:16:39.012457] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:27.023 [2024-07-22 18:16:39.012635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64605 ] 00:09:27.282 [2024-07-22 18:16:39.190388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.540 [2024-07-22 18:16:39.506739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.799 [2024-07-22 18:16:39.715014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.992  Copying: 60/60 [kB] (average 58 MBps) 00:09:28.992 00:09:28.992 18:16:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:09:28.992 18:16:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:28.992 18:16:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:28.992 18:16:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:28.992 { 00:09:28.992 "subsystems": [ 00:09:28.992 { 00:09:28.992 "subsystem": "bdev", 00:09:28.992 "config": [ 00:09:28.992 { 00:09:28.992 "params": { 00:09:28.992 "trtype": "pcie", 00:09:28.992 "traddr": "0000:00:10.0", 00:09:28.992 "name": "Nvme0" 00:09:28.992 }, 00:09:28.993 "method": "bdev_nvme_attach_controller" 00:09:28.993 }, 00:09:28.993 { 00:09:28.993 "method": "bdev_wait_for_examine" 00:09:28.993 } 00:09:28.993 ] 00:09:28.993 } 00:09:28.993 ] 00:09:28.993 } 00:09:29.252 [2024-07-22 18:16:41.028322] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:29.252 [2024-07-22 18:16:41.028494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64631 ] 00:09:29.252 [2024-07-22 18:16:41.207907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.511 [2024-07-22 18:16:41.502377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.770 [2024-07-22 18:16:41.707703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:31.442  Copying: 60/60 [kB] (average 58 MBps) 00:09:31.442 00:09:31.442 18:16:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:31.442 18:16:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:09:31.442 18:16:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:31.442 18:16:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:31.442 18:16:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:09:31.442 18:16:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:31.442 18:16:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:31.442 18:16:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:31.442 18:16:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:31.442 18:16:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:31.442 18:16:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:31.442 { 00:09:31.442 "subsystems": [ 00:09:31.442 { 00:09:31.442 "subsystem": "bdev", 00:09:31.442 "config": [ 00:09:31.442 { 00:09:31.442 "params": { 00:09:31.442 "trtype": "pcie", 00:09:31.442 "traddr": "0000:00:10.0", 00:09:31.442 "name": "Nvme0" 00:09:31.442 }, 00:09:31.442 "method": "bdev_nvme_attach_controller" 00:09:31.442 }, 00:09:31.442 { 00:09:31.442 "method": "bdev_wait_for_examine" 00:09:31.442 } 00:09:31.442 ] 00:09:31.442 } 00:09:31.442 ] 00:09:31.442 } 00:09:31.442 [2024-07-22 18:16:43.265491] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:31.442 [2024-07-22 18:16:43.265666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64669 ] 00:09:31.442 [2024-07-22 18:16:43.455810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.702 [2024-07-22 18:16:43.705065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.961 [2024-07-22 18:16:43.911544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.155  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:33.155 00:09:33.155 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:09:33.155 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:33.155 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:09:33.155 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:09:33.155 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:09:33.155 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:09:33.155 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:33.155 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:34.090 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:09:34.090 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:34.090 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:34.090 18:16:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:34.090 { 00:09:34.090 "subsystems": [ 00:09:34.090 { 00:09:34.090 "subsystem": "bdev", 00:09:34.090 "config": [ 00:09:34.090 { 00:09:34.090 "params": { 00:09:34.090 "trtype": "pcie", 00:09:34.090 "traddr": "0000:00:10.0", 00:09:34.090 "name": "Nvme0" 00:09:34.090 }, 00:09:34.090 "method": "bdev_nvme_attach_controller" 00:09:34.090 }, 00:09:34.090 { 00:09:34.090 "method": "bdev_wait_for_examine" 00:09:34.090 } 00:09:34.090 ] 00:09:34.090 } 00:09:34.090 ] 00:09:34.090 } 00:09:34.090 [2024-07-22 18:16:45.887900] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:34.090 [2024-07-22 18:16:45.888092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64705 ] 00:09:34.090 [2024-07-22 18:16:46.063651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.348 [2024-07-22 18:16:46.357923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.606 [2024-07-22 18:16:46.572930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:36.298  Copying: 56/56 [kB] (average 54 MBps) 00:09:36.298 00:09:36.298 18:16:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:09:36.298 18:16:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:36.298 18:16:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:36.298 18:16:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:36.298 { 00:09:36.298 "subsystems": [ 00:09:36.298 { 00:09:36.298 "subsystem": "bdev", 00:09:36.298 "config": [ 00:09:36.298 { 00:09:36.298 "params": { 00:09:36.298 "trtype": "pcie", 00:09:36.298 "traddr": "0000:00:10.0", 00:09:36.298 "name": "Nvme0" 00:09:36.298 }, 00:09:36.298 "method": "bdev_nvme_attach_controller" 00:09:36.298 }, 00:09:36.298 { 00:09:36.298 "method": "bdev_wait_for_examine" 00:09:36.298 } 00:09:36.298 ] 00:09:36.298 } 00:09:36.298 ] 00:09:36.298 } 00:09:36.298 [2024-07-22 18:16:48.162675] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:36.298 [2024-07-22 18:16:48.162878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64736 ] 00:09:36.556 [2024-07-22 18:16:48.339436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.814 [2024-07-22 18:16:48.635854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.072 [2024-07-22 18:16:48.874763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:38.449  Copying: 56/56 [kB] (average 27 MBps) 00:09:38.449 00:09:38.449 18:16:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:38.449 18:16:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:09:38.449 18:16:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:38.449 18:16:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:38.449 18:16:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:09:38.449 18:16:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:38.449 18:16:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:38.449 18:16:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:38.449 18:16:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:38.449 18:16:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:38.449 18:16:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:38.449 { 00:09:38.449 "subsystems": [ 00:09:38.449 { 00:09:38.449 "subsystem": "bdev", 00:09:38.449 "config": [ 00:09:38.449 { 00:09:38.449 "params": { 00:09:38.449 "trtype": "pcie", 00:09:38.449 "traddr": "0000:00:10.0", 00:09:38.449 "name": "Nvme0" 00:09:38.449 }, 00:09:38.449 "method": "bdev_nvme_attach_controller" 00:09:38.449 }, 00:09:38.449 { 00:09:38.449 "method": "bdev_wait_for_examine" 00:09:38.449 } 00:09:38.449 ] 00:09:38.449 } 00:09:38.449 ] 00:09:38.449 } 00:09:38.449 [2024-07-22 18:16:50.159469] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:38.449 [2024-07-22 18:16:50.159618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64769 ] 00:09:38.449 [2024-07-22 18:16:50.320276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.707 [2024-07-22 18:16:50.590759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.967 [2024-07-22 18:16:50.803545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:40.192  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:40.192 00:09:40.192 18:16:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:40.192 18:16:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:09:40.192 18:16:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:09:40.192 18:16:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:09:40.192 18:16:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:09:40.192 18:16:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:40.192 18:16:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:41.126 18:16:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:09:41.126 18:16:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:41.126 18:16:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:41.126 18:16:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:41.126 { 00:09:41.126 "subsystems": [ 00:09:41.126 { 00:09:41.126 "subsystem": "bdev", 00:09:41.126 "config": [ 00:09:41.126 { 00:09:41.126 "params": { 00:09:41.126 "trtype": "pcie", 00:09:41.126 "traddr": "0000:00:10.0", 00:09:41.126 "name": "Nvme0" 00:09:41.126 }, 00:09:41.126 "method": "bdev_nvme_attach_controller" 00:09:41.126 }, 00:09:41.126 { 00:09:41.126 "method": "bdev_wait_for_examine" 00:09:41.126 } 00:09:41.126 ] 00:09:41.126 } 00:09:41.126 ] 00:09:41.126 } 00:09:41.126 [2024-07-22 18:16:52.996242] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:41.126 [2024-07-22 18:16:52.996431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64806 ] 00:09:41.385 [2024-07-22 18:16:53.170429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.644 [2024-07-22 18:16:53.412774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.644 [2024-07-22 18:16:53.617689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:42.840  Copying: 56/56 [kB] (average 54 MBps) 00:09:42.840 00:09:42.840 18:16:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:09:42.840 18:16:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:42.840 18:16:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:42.840 18:16:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:42.840 { 00:09:42.840 "subsystems": [ 00:09:42.840 { 00:09:42.840 "subsystem": "bdev", 00:09:42.840 "config": [ 00:09:42.840 { 00:09:42.840 "params": { 00:09:42.840 "trtype": "pcie", 00:09:42.840 "traddr": "0000:00:10.0", 00:09:42.840 "name": "Nvme0" 00:09:42.840 }, 00:09:42.840 "method": "bdev_nvme_attach_controller" 00:09:42.841 }, 00:09:42.841 { 00:09:42.841 "method": "bdev_wait_for_examine" 00:09:42.841 } 00:09:42.841 ] 00:09:42.841 } 00:09:42.841 ] 00:09:42.841 } 00:09:43.099 [2024-07-22 18:16:54.891295] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:43.099 [2024-07-22 18:16:54.891580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64838 ] 00:09:43.099 [2024-07-22 18:16:55.070220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.358 [2024-07-22 18:16:55.322103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.616 [2024-07-22 18:16:55.531988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:45.246  Copying: 56/56 [kB] (average 54 MBps) 00:09:45.246 00:09:45.246 18:16:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:45.246 18:16:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:09:45.246 18:16:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:45.246 18:16:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:45.246 18:16:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:09:45.246 18:16:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:45.246 18:16:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:45.246 18:16:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:45.246 18:16:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:45.246 18:16:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:45.246 18:16:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:45.246 { 00:09:45.246 "subsystems": [ 00:09:45.246 { 00:09:45.246 "subsystem": "bdev", 00:09:45.246 "config": [ 00:09:45.246 { 00:09:45.246 "params": { 00:09:45.246 "trtype": "pcie", 00:09:45.246 "traddr": "0000:00:10.0", 00:09:45.246 "name": "Nvme0" 00:09:45.246 }, 00:09:45.246 "method": "bdev_nvme_attach_controller" 00:09:45.246 }, 00:09:45.246 { 00:09:45.246 "method": "bdev_wait_for_examine" 00:09:45.246 } 00:09:45.246 ] 00:09:45.246 } 00:09:45.246 ] 00:09:45.246 } 00:09:45.246 [2024-07-22 18:16:57.116280] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:45.247 [2024-07-22 18:16:57.116503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64871 ] 00:09:45.504 [2024-07-22 18:16:57.291392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.762 [2024-07-22 18:16:57.538983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.762 [2024-07-22 18:16:57.747278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:46.953  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:46.953 00:09:46.953 18:16:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:09:46.953 18:16:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:46.953 18:16:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:09:46.953 18:16:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:09:46.953 18:16:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:09:46.953 18:16:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:09:46.953 18:16:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:46.953 18:16:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:47.520 18:16:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:09:47.520 18:16:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:47.520 18:16:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:47.520 18:16:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:47.520 { 00:09:47.520 "subsystems": [ 00:09:47.520 { 00:09:47.520 "subsystem": "bdev", 00:09:47.520 "config": [ 00:09:47.520 { 00:09:47.520 "params": { 00:09:47.520 "trtype": "pcie", 00:09:47.520 "traddr": "0000:00:10.0", 00:09:47.520 "name": "Nvme0" 00:09:47.520 }, 00:09:47.520 "method": "bdev_nvme_attach_controller" 00:09:47.520 }, 00:09:47.520 { 00:09:47.520 "method": "bdev_wait_for_examine" 00:09:47.520 } 00:09:47.520 ] 00:09:47.520 } 00:09:47.520 ] 00:09:47.520 } 00:09:47.520 [2024-07-22 18:16:59.495760] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:47.520 [2024-07-22 18:16:59.495927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64909 ] 00:09:47.778 [2024-07-22 18:16:59.671107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.035 [2024-07-22 18:16:59.951283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.292 [2024-07-22 18:17:00.157184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:49.518  Copying: 48/48 [kB] (average 46 MBps) 00:09:49.518 00:09:49.776 18:17:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:09:49.776 18:17:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:49.776 18:17:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:49.776 18:17:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:49.776 { 00:09:49.776 "subsystems": [ 00:09:49.776 { 00:09:49.776 "subsystem": "bdev", 00:09:49.777 "config": [ 00:09:49.777 { 00:09:49.777 "params": { 00:09:49.777 "trtype": "pcie", 00:09:49.777 "traddr": "0000:00:10.0", 00:09:49.777 "name": "Nvme0" 00:09:49.777 }, 00:09:49.777 "method": "bdev_nvme_attach_controller" 00:09:49.777 }, 00:09:49.777 { 00:09:49.777 "method": "bdev_wait_for_examine" 00:09:49.777 } 00:09:49.777 ] 00:09:49.777 } 00:09:49.777 ] 00:09:49.777 } 00:09:49.777 [2024-07-22 18:17:01.663694] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:49.777 [2024-07-22 18:17:01.664687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64940 ] 00:09:50.035 [2024-07-22 18:17:01.838737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.293 [2024-07-22 18:17:02.088365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.293 [2024-07-22 18:17:02.297902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:51.486  Copying: 48/48 [kB] (average 46 MBps) 00:09:51.486 00:09:51.486 18:17:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:51.486 18:17:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:09:51.486 18:17:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:51.486 18:17:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:51.486 18:17:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:09:51.486 18:17:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:51.486 18:17:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:51.486 18:17:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:51.486 18:17:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:51.486 18:17:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:51.486 18:17:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:51.744 { 00:09:51.744 "subsystems": [ 00:09:51.744 { 00:09:51.744 "subsystem": "bdev", 00:09:51.744 "config": [ 00:09:51.744 { 00:09:51.744 "params": { 00:09:51.744 "trtype": "pcie", 00:09:51.744 "traddr": "0000:00:10.0", 00:09:51.744 "name": "Nvme0" 00:09:51.744 }, 00:09:51.744 "method": "bdev_nvme_attach_controller" 00:09:51.744 }, 00:09:51.744 { 00:09:51.744 "method": "bdev_wait_for_examine" 00:09:51.744 } 00:09:51.744 ] 00:09:51.744 } 00:09:51.744 ] 00:09:51.744 } 00:09:51.744 [2024-07-22 18:17:03.590619] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:51.744 [2024-07-22 18:17:03.591038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64972 ] 00:09:52.002 [2024-07-22 18:17:03.768943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.002 [2024-07-22 18:17:04.010961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.260 [2024-07-22 18:17:04.215416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:53.894  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:53.894 00:09:53.894 18:17:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:53.894 18:17:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:09:53.894 18:17:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:09:53.894 18:17:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:09:53.894 18:17:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:09:53.894 18:17:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:53.894 18:17:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:54.153 18:17:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:09:54.153 18:17:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:54.153 18:17:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:54.153 18:17:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:54.422 { 00:09:54.422 "subsystems": [ 00:09:54.422 { 00:09:54.422 "subsystem": "bdev", 00:09:54.422 "config": [ 00:09:54.422 { 00:09:54.422 "params": { 00:09:54.422 "trtype": "pcie", 00:09:54.422 "traddr": "0000:00:10.0", 00:09:54.422 "name": "Nvme0" 00:09:54.422 }, 00:09:54.422 "method": "bdev_nvme_attach_controller" 00:09:54.422 }, 00:09:54.422 { 00:09:54.422 "method": "bdev_wait_for_examine" 00:09:54.422 } 00:09:54.422 ] 00:09:54.422 } 00:09:54.422 ] 00:09:54.422 } 00:09:54.422 [2024-07-22 18:17:06.214425] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:54.422 [2024-07-22 18:17:06.214565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65008 ] 00:09:54.422 [2024-07-22 18:17:06.380904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.709 [2024-07-22 18:17:06.631212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.967 [2024-07-22 18:17:06.835800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:56.156  Copying: 48/48 [kB] (average 46 MBps) 00:09:56.156 00:09:56.156 18:17:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:09:56.156 18:17:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:56.156 18:17:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:56.156 18:17:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:56.156 { 00:09:56.156 "subsystems": [ 00:09:56.156 { 00:09:56.156 "subsystem": "bdev", 00:09:56.156 "config": [ 00:09:56.156 { 00:09:56.156 "params": { 00:09:56.156 "trtype": "pcie", 00:09:56.156 "traddr": "0000:00:10.0", 00:09:56.156 "name": "Nvme0" 00:09:56.156 }, 00:09:56.156 "method": "bdev_nvme_attach_controller" 00:09:56.156 }, 00:09:56.156 { 00:09:56.156 "method": "bdev_wait_for_examine" 00:09:56.156 } 00:09:56.156 ] 00:09:56.156 } 00:09:56.156 ] 00:09:56.156 } 00:09:56.156 [2024-07-22 18:17:08.172338] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:56.156 [2024-07-22 18:17:08.172531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65038 ] 00:09:56.414 [2024-07-22 18:17:08.345366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.672 [2024-07-22 18:17:08.603032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.930 [2024-07-22 18:17:08.808841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:58.560  Copying: 48/48 [kB] (average 46 MBps) 00:09:58.560 00:09:58.560 18:17:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:58.560 18:17:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:09:58.560 18:17:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:58.560 18:17:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:58.560 18:17:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:09:58.560 18:17:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:58.560 18:17:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:58.560 18:17:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:58.560 18:17:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:58.560 18:17:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:58.560 18:17:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:58.560 { 00:09:58.560 "subsystems": [ 00:09:58.560 { 00:09:58.561 "subsystem": "bdev", 00:09:58.561 "config": [ 00:09:58.561 { 00:09:58.561 "params": { 00:09:58.561 "trtype": "pcie", 00:09:58.561 "traddr": "0000:00:10.0", 00:09:58.561 "name": "Nvme0" 00:09:58.561 }, 00:09:58.561 "method": "bdev_nvme_attach_controller" 00:09:58.561 }, 00:09:58.561 { 00:09:58.561 "method": "bdev_wait_for_examine" 00:09:58.561 } 00:09:58.561 ] 00:09:58.561 } 00:09:58.561 ] 00:09:58.561 } 00:09:58.561 [2024-07-22 18:17:10.380010] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:09:58.561 [2024-07-22 18:17:10.380407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65071 ] 00:09:58.561 [2024-07-22 18:17:10.546138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.819 [2024-07-22 18:17:10.790494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.077 [2024-07-22 18:17:10.998822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:00.268  Copying: 1024/1024 [kB] (average 500 MBps) 00:10:00.268 00:10:00.268 00:10:00.268 real 0m40.702s 00:10:00.268 user 0m34.278s 00:10:00.268 sys 0m17.226s 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.268 ************************************ 00:10:00.268 END TEST dd_rw 00:10:00.268 ************************************ 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:00.268 ************************************ 00:10:00.268 START TEST dd_rw_offset 00:10:00.268 ************************************ 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:00.268 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:10:00.269 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=d4czgz4gmcovaoqb2t6impxkn0kzrzziljdbyoo68u7c9dh38sfp98apjucez0y9brwp8n1sc3w6f97uvtxa8x5iy6weasheqhmtjbk8s70kjqqzo87c0f5w3a76b8a0fje4pttpmqg0at4t79myxancyukup6t4vktcfczjfkqilce87dhrr9nittodutbqoryjn8o7ldfw67muruipx3pw7m2416u56mv3v747bbvoviq3g73mgxd2oef17569rq8bll80h1h63d2on5u8reb4hsujcb1t6tbyh8k819lg8mxwa9lso7bsd7p4hwvelldh57bikrw5ubsttiw2savyb7q7qh30i1pvxrf67o4fvpt3otmkg8s96vjyjvytvajrwty719lsi7t50tctilbfemkfo78awaug1fo8kqyqm581d8ptynly3ifplevdssavk3mgrl607p5qivub2kyg91jara9tq3ka4szp4lomtp4owe3k716y2nkq6q11aecx4y9b7kbclmsx5sai6ychq0v6dbccdf307ydpu80rtxx5xklgq4sbbj6h4ygu8gfoepq5v0n0rct6g753pa3e1gxlqsn4863lbcdjov5z0532m48ql1eyu47cffhc9efw11l7mdv4plz4o9ba70fxxdmp7znw8ccx90urxtd675uspkt4xh9dnhfl6ksqfe8jlxpkthc2kiksa56l4fzzgnu4p15jm0frwny16n2ikjl8tve38bzu2vl0entldr1ta69xon0v3scfhkwqhe0z4yy50fv9zaseas28kol4v5q7rv538br05s0pz0e975jdjah3n6t7rabqua2sg8pjlim60e5e749vvfu0l4vw1swni8mrqew2spd3tq2w012y71gbzogzk38ijptemu5e5i0rk6tmr9en6ldbldccxy3jgz5mlbuo48178jt8rwid3zrsixxlcgdbcrem0e5fqj8dhn3kbyu7dkajaxkej5y2km5l51c3rhanza09b7yabv05ldb1jau9j9c6mjl68trtaivhz1t618u22gpcr3yas2ozd8moug0f4wo3s4b4ls4tvao2sh0ujh3coh0uurxzlb8qehszgtdlrtwu0area8t7w12l3ixsf913mx35xocrb3ostpcwbqhnit57w8g89em165wvpbfohl9ao80xaezakd60lmxo7q5pbe6lqgpu5834uahdez6ai5f5231tj84d2i768p163jxnly75hg70t13q4hbv3uz79lg2vpsip80dlgah9t38ghg0ueiluhfjvacmvkl38y4mkz77nkq7cm5rfz1tz21496qhe9sr4wxd4jdcm7bdllet37gm9ov44vxp63cs6zjrtnc1pfwv2vtx79u0eesy6xlyzrzul5irwnmjnhp01rosqnvh1fm1ge1l1du3l1f2ltjx9qx19pcner1vp0suexn2uj4qufnty361q1rh84km5j03anqw54fwufxr679ioq6dl654gzmhqunsp62qa5wzf2b3r5fs25qh97wmlt9u1v62mrf1w29fusw8qd8t6m6hs3pcviatzbkb5lyo7dhikp3ubmhoqoqjmbrmmvk1dh2yuqgv66oh9obtnp5nuho39v10bgjhbxd8rv9ed0fg0bjigzdgdcmpfn4q8ev5cflkupgz9altz00mplaf7elrkrxz2wstnwbpcu9haimufjq379rs2ej6egvlgqowr1uif3952j1k2f72gajmtf06tzkkl0w1bhgr56b4z4z9sk1kw9jdsz9dmxm4dqe3vsc9e34lln56id4z4r3r2rn8mntqur117ln0pr61qyutnhn98al4luoyn2wo335z8vb8a0w0ugskvu9kqylb2t10cpa8t3am9svnq3s2qd6ewk9ey15hfsc00zla22edz0kkhg4eq768w94sfu9n72sg45n4enh2jwtb30rqvkpgd91kl4s3uqjljpmrq778h9xknf752q4jeqtt1hgtvzyxjgeoxxf6993gn0o594xamitnlkcnz3z93ie5pdb6lx008e82wsv6j41dpw54p1wp0ud9ts7598f0pgpgmyp8blrem94dtndvgzgc7cporxg25rimspsnpqph2j1zjmcad3jqyjh1pypkm395kggswhui19mabhmg78dzvktabrokhe5y3szks996bb4dq22cjwxu0aasv8i2dlgk0xpp520etqlfct3zmhb58m31eyawvwp7a9dbk64fuuuransze4dpb4uwtlr0ginmhu2uom5bgwqdypi7z8zbyqb159i7uvwqrvsl2tt84a31hr20xqimr0mvke2f1k77u23vq5ekv5aenu1c1owzolbvsg9nju2du61xrhmjtboc7mx8eliwflpxppd8vmxmuma9nean6tukn2m24pey92uaafm6tljbb6f3ytqxpekfeg18zqac9erom6tcqir2cnv1pq3h0f7gn6hdpxkkyi2r6vw1cujkkvr9qdba4h3q921mrmy7y90c6t227b1mnjduqz26yjf7g91ikdl2gjf6k0pqxyk6p5yj2jgz8fhzlq0pat6bdaaq52j72sqo25yrahip2t36y8xcwuz7dbg7tiu4geuvb706a8k3m6652rc29drxji19e8o3xfgc58rldpmwnfexyvwskrtmto0z5q3jslcytz3i4gp25bijjc7d5ksqcupye489d5nca4s2tl5v3w3nz6d6mntyvlwaox29hef7wvbswydnh4tg94a3279l7cmah9pqq11l68765ojkuyhtz6rr20rhaas7u7hg3ezo3egk5om7shch21l9lrlm74qxrls1anc6tqidwh8gte638fp92uj9ogwhyc54cwwv1t551x8mtbmztqde1ntg5uqec9lx7rniuq2j7s0kia7l3cp50e0d651fkx677eftc4wq79jg0l1lear9qm5c0bhqymdaefp829lrh937tpi05dqvhc0ulo2jne2m2xq7kpvt3ammehy4hi8n4a8gfl5c5z3aj6rr1xcyjzh5eeik0rfal5awlsggsvdvx7edaw3qjhppaurnmhrcbfm0nlowytps4g4agn2y6wrqahpo7cbvw10dfzn7l143j1795jxl7senuwb40vtfn8qjykmvbv1c8b3f2effeg4372cljzt0lardo5k985w7tgxgoora1u38wig2t2yewo183ys14xbqaabej2i13dt8a6mvwpe3mfih1e13h3kj7eonujs8ogk1mqbhtfrvuiledg6t0fh5p358m0144bpn512g57fsf8zfnjew4342ty61efvd781kk9n2paojxwmpopw87iwev7tfp7e0thdykmgusmi89ujh7b4ythz886eaiatgryj5eya5ec3qvnzt6jkghgpxqrhrfhlemibzdcyfarc50waoqnjh72prgnigipmt4e8rwxnkkdw0g6nxqw0nfbgzwekbl8lbvvgmv5gnvmh7dikq93twhoib4u
g5yy5tchrqy46gw30g7b81t1u7ncpdujw3yn4kwsuwjyxqzf4ykaploevr395y6cl7prjq4yhxrxylbz1wtpji5kg9i25ol68tcmk1fntqk470wrxdny139kky1stfzbu9eop59vzebv8svdnply0qufuz852eemr29qlmwto7asvdjnl97srfejpv54blnvuldac8vpmjgwyck294m7mmq0qtcrdwbpnhr6t28isl5dq70117nvm3dbgpfln5lw2n7uknmlh2yg39t0kk9fw3in9fnt8z91o6mk9dh2x7vrtffrn69z3mvyhd30lwna7pqkowuqw5k2si7u5wualmvegumritjus4ngtnu32sogbndrdse684drgdxwqe5vwu96v6axt5rrwptst9t60m36t37h1isc63i7scaz7jbstkxykegv2uwxoctuemfqpbvdem30sihoajfl6ti4t4okmceqrh2qtgfy43atn1p959pmtvu0nnxxglf0ykqcakegk940m6xlq41h5l9r141t7fq390wfwn 00:10:00.269 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:10:00.269 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:10:00.269 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:00.269 18:17:12 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:00.527 { 00:10:00.527 "subsystems": [ 00:10:00.527 { 00:10:00.527 "subsystem": "bdev", 00:10:00.527 "config": [ 00:10:00.527 { 00:10:00.527 "params": { 00:10:00.527 "trtype": "pcie", 00:10:00.527 "traddr": "0000:00:10.0", 00:10:00.527 "name": "Nvme0" 00:10:00.527 }, 00:10:00.527 "method": "bdev_nvme_attach_controller" 00:10:00.527 }, 00:10:00.527 { 00:10:00.527 "method": "bdev_wait_for_examine" 00:10:00.527 } 00:10:00.527 ] 00:10:00.527 } 00:10:00.527 ] 00:10:00.527 } 00:10:00.527 [2024-07-22 18:17:12.373181] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:00.527 [2024-07-22 18:17:12.373364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65115 ] 00:10:00.527 [2024-07-22 18:17:12.542005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.785 [2024-07-22 18:17:12.793929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.043 [2024-07-22 18:17:13.003354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:02.736  Copying: 4096/4096 [B] (average 4000 kBps) 00:10:02.736 00:10:02.736 18:17:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:10:02.736 18:17:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:10:02.736 18:17:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:10:02.736 18:17:14 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:02.736 { 00:10:02.736 "subsystems": [ 00:10:02.736 { 00:10:02.736 "subsystem": "bdev", 00:10:02.736 "config": [ 00:10:02.736 { 00:10:02.736 "params": { 00:10:02.736 "trtype": "pcie", 00:10:02.736 "traddr": "0000:00:10.0", 00:10:02.736 "name": "Nvme0" 00:10:02.736 }, 00:10:02.736 "method": "bdev_nvme_attach_controller" 00:10:02.736 }, 00:10:02.736 { 00:10:02.736 "method": "bdev_wait_for_examine" 00:10:02.736 } 00:10:02.736 ] 00:10:02.736 } 00:10:02.736 ] 00:10:02.736 } 00:10:02.736 [2024-07-22 18:17:14.526689] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:02.736 [2024-07-22 18:17:14.526880] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65147 ] 00:10:02.736 [2024-07-22 18:17:14.703531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.303 [2024-07-22 18:17:15.014557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.303 [2024-07-22 18:17:15.224719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:04.496  Copying: 4096/4096 [B] (average 4000 kBps) 00:10:04.496 00:10:04.496 18:17:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:10:04.496 ************************************ 00:10:04.496 END TEST dd_rw_offset 00:10:04.496 ************************************ 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ d4czgz4gmcovaoqb2t6impxkn0kzrzziljdbyoo68u7c9dh38sfp98apjucez0y9brwp8n1sc3w6f97uvtxa8x5iy6weasheqhmtjbk8s70kjqqzo87c0f5w3a76b8a0fje4pttpmqg0at4t79myxancyukup6t4vktcfczjfkqilce87dhrr9nittodutbqoryjn8o7ldfw67muruipx3pw7m2416u56mv3v747bbvoviq3g73mgxd2oef17569rq8bll80h1h63d2on5u8reb4hsujcb1t6tbyh8k819lg8mxwa9lso7bsd7p4hwvelldh57bikrw5ubsttiw2savyb7q7qh30i1pvxrf67o4fvpt3otmkg8s96vjyjvytvajrwty719lsi7t50tctilbfemkfo78awaug1fo8kqyqm581d8ptynly3ifplevdssavk3mgrl607p5qivub2kyg91jara9tq3ka4szp4lomtp4owe3k716y2nkq6q11aecx4y9b7kbclmsx5sai6ychq0v6dbccdf307ydpu80rtxx5xklgq4sbbj6h4ygu8gfoepq5v0n0rct6g753pa3e1gxlqsn4863lbcdjov5z0532m48ql1eyu47cffhc9efw11l7mdv4plz4o9ba70fxxdmp7znw8ccx90urxtd675uspkt4xh9dnhfl6ksqfe8jlxpkthc2kiksa56l4fzzgnu4p15jm0frwny16n2ikjl8tve38bzu2vl0entldr1ta69xon0v3scfhkwqhe0z4yy50fv9zaseas28kol4v5q7rv538br05s0pz0e975jdjah3n6t7rabqua2sg8pjlim60e5e749vvfu0l4vw1swni8mrqew2spd3tq2w012y71gbzogzk38ijptemu5e5i0rk6tmr9en6ldbldccxy3jgz5mlbuo48178jt8rwid3zrsixxlcgdbcrem0e5fqj8dhn3kbyu7dkajaxkej5y2km5l51c3rhanza09b7yabv05ldb1jau9j9c6mjl68trtaivhz1t618u22gpcr3yas2ozd8moug0f4wo3s4b4ls4tvao2sh0ujh3coh0uurxzlb8qehszgtdlrtwu0area8t7w12l3ixsf913mx35xocrb3ostpcwbqhnit57w8g89em165wvpbfohl9ao80xaezakd60lmxo7q5pbe6lqgpu5834uahdez6ai5f5231tj84d2i768p163jxnly75hg70t13q4hbv3uz79lg2vpsip80dlgah9t38ghg0ueiluhfjvacmvkl38y4mkz77nkq7cm5rfz1tz21496qhe9sr4wxd4jdcm7bdllet37gm9ov44vxp63cs6zjrtnc1pfwv2vtx79u0eesy6xlyzrzul5irwnmjnhp01rosqnvh1fm1ge1l1du3l1f2ltjx9qx19pcner1vp0suexn2uj4qufnty361q1rh84km5j03anqw54fwufxr679ioq6dl654gzmhqunsp62qa5wzf2b3r5fs25qh97wmlt9u1v62mrf1w29fusw8qd8t6m6hs3pcviatzbkb5lyo7dhikp3ubmhoqoqjmbrmmvk1dh2yuqgv66oh9obtnp5nuho39v10bgjhbxd8rv9ed0fg0bjigzdgdcmpfn4q8ev5cflkupgz9altz00mplaf7elrkrxz2wstnwbpcu9haimufjq379rs2ej6egvlgqowr1uif3952j1k2f72gajmtf06tzkkl0w1bhgr56b4z4z9sk1kw9jdsz9dmxm4dqe3vsc9e34lln56id4z4r3r2rn8mntqur117ln0pr61qyutnhn98al4luoyn2wo335z8vb8a0w0ugskvu9kqylb2t10cpa8t3am9svnq3s2qd6ewk9ey15hfsc00zla22edz0kkhg4eq768w94sfu9n72sg45n4enh2jwtb30rqvkpgd91kl4s3uqjljpmrq778h9xknf752q4jeqtt1hgtvzyxjgeoxxf6993gn0o594xamitnlkcnz3z93ie5pdb6lx008e82wsv6j41dpw54p1wp0ud9ts7598f0pgpgmyp8blrem94dtndvgzgc7cporxg25rimspsnpqph2j1zjmcad3jqyjh1pypkm395kggswhui19mabhmg78dzvktabrokhe5y3szks996bb4dq22cjwxu0aasv8i2dlgk0xpp520etqlfct3zmhb58m31eyawvwp7a9dbk64fuuuransze4dpb4uwtlr0ginmhu2uom5bgwqdypi7z8zbyqb159i7uvwqrvsl2tt84a31hr20xqimr0mvke2f1k77u23vq5ekv5aenu1c1owzolbvsg9nju2du61xrhmjtboc7mx8eliwflpxppd8vmxmuma9nean6tukn2m24pey92uaafm6tljbb6f3ytqxpekfeg18zqac9erom6tcqir2cnv1p
q3h0f7gn6hdpxkkyi2r6vw1cujkkvr9qdba4h3q921mrmy7y90c6t227b1mnjduqz26yjf7g91ikdl2gjf6k0pqxyk6p5yj2jgz8fhzlq0pat6bdaaq52j72sqo25yrahip2t36y8xcwuz7dbg7tiu4geuvb706a8k3m6652rc29drxji19e8o3xfgc58rldpmwnfexyvwskrtmto0z5q3jslcytz3i4gp25bijjc7d5ksqcupye489d5nca4s2tl5v3w3nz6d6mntyvlwaox29hef7wvbswydnh4tg94a3279l7cmah9pqq11l68765ojkuyhtz6rr20rhaas7u7hg3ezo3egk5om7shch21l9lrlm74qxrls1anc6tqidwh8gte638fp92uj9ogwhyc54cwwv1t551x8mtbmztqde1ntg5uqec9lx7rniuq2j7s0kia7l3cp50e0d651fkx677eftc4wq79jg0l1lear9qm5c0bhqymdaefp829lrh937tpi05dqvhc0ulo2jne2m2xq7kpvt3ammehy4hi8n4a8gfl5c5z3aj6rr1xcyjzh5eeik0rfal5awlsggsvdvx7edaw3qjhppaurnmhrcbfm0nlowytps4g4agn2y6wrqahpo7cbvw10dfzn7l143j1795jxl7senuwb40vtfn8qjykmvbv1c8b3f2effeg4372cljzt0lardo5k985w7tgxgoora1u38wig2t2yewo183ys14xbqaabej2i13dt8a6mvwpe3mfih1e13h3kj7eonujs8ogk1mqbhtfrvuiledg6t0fh5p358m0144bpn512g57fsf8zfnjew4342ty61efvd781kk9n2paojxwmpopw87iwev7tfp7e0thdykmgusmi89ujh7b4ythz886eaiatgryj5eya5ec3qvnzt6jkghgpxqrhrfhlemibzdcyfarc50waoqnjh72prgnigipmt4e8rwxnkkdw0g6nxqw0nfbgzwekbl8lbvvgmv5gnvmh7dikq93twhoib4ug5yy5tchrqy46gw30g7b81t1u7ncpdujw3yn4kwsuwjyxqzf4ykaploevr395y6cl7prjq4yhxrxylbz1wtpji5kg9i25ol68tcmk1fntqk470wrxdny139kky1stfzbu9eop59vzebv8svdnply0qufuz852eemr29qlmwto7asvdjnl97srfejpv54blnvuldac8vpmjgwyck294m7mmq0qtcrdwbpnhr6t28isl5dq70117nvm3dbgpfln5lw2n7uknmlh2yg39t0kk9fw3in9fnt8z91o6mk9dh2x7vrtffrn69z3mvyhd30lwna7pqkowuqw5k2si7u5wualmvegumritjus4ngtnu32sogbndrdse684drgdxwqe5vwu96v6axt5rrwptst9t60m36t37h1isc63i7scaz7jbstkxykegv2uwxoctuemfqpbvdem30sihoajfl6ti4t4okmceqrh2qtgfy43atn1p959pmtvu0nnxxglf0ykqcakegk940m6xlq41h5l9r141t7fq390wfwn == \d\4\c\z\g\z\4\g\m\c\o\v\a\o\q\b\2\t\6\i\m\p\x\k\n\0\k\z\r\z\z\i\l\j\d\b\y\o\o\6\8\u\7\c\9\d\h\3\8\s\f\p\9\8\a\p\j\u\c\e\z\0\y\9\b\r\w\p\8\n\1\s\c\3\w\6\f\9\7\u\v\t\x\a\8\x\5\i\y\6\w\e\a\s\h\e\q\h\m\t\j\b\k\8\s\7\0\k\j\q\q\z\o\8\7\c\0\f\5\w\3\a\7\6\b\8\a\0\f\j\e\4\p\t\t\p\m\q\g\0\a\t\4\t\7\9\m\y\x\a\n\c\y\u\k\u\p\6\t\4\v\k\t\c\f\c\z\j\f\k\q\i\l\c\e\8\7\d\h\r\r\9\n\i\t\t\o\d\u\t\b\q\o\r\y\j\n\8\o\7\l\d\f\w\6\7\m\u\r\u\i\p\x\3\p\w\7\m\2\4\1\6\u\5\6\m\v\3\v\7\4\7\b\b\v\o\v\i\q\3\g\7\3\m\g\x\d\2\o\e\f\1\7\5\6\9\r\q\8\b\l\l\8\0\h\1\h\6\3\d\2\o\n\5\u\8\r\e\b\4\h\s\u\j\c\b\1\t\6\t\b\y\h\8\k\8\1\9\l\g\8\m\x\w\a\9\l\s\o\7\b\s\d\7\p\4\h\w\v\e\l\l\d\h\5\7\b\i\k\r\w\5\u\b\s\t\t\i\w\2\s\a\v\y\b\7\q\7\q\h\3\0\i\1\p\v\x\r\f\6\7\o\4\f\v\p\t\3\o\t\m\k\g\8\s\9\6\v\j\y\j\v\y\t\v\a\j\r\w\t\y\7\1\9\l\s\i\7\t\5\0\t\c\t\i\l\b\f\e\m\k\f\o\7\8\a\w\a\u\g\1\f\o\8\k\q\y\q\m\5\8\1\d\8\p\t\y\n\l\y\3\i\f\p\l\e\v\d\s\s\a\v\k\3\m\g\r\l\6\0\7\p\5\q\i\v\u\b\2\k\y\g\9\1\j\a\r\a\9\t\q\3\k\a\4\s\z\p\4\l\o\m\t\p\4\o\w\e\3\k\7\1\6\y\2\n\k\q\6\q\1\1\a\e\c\x\4\y\9\b\7\k\b\c\l\m\s\x\5\s\a\i\6\y\c\h\q\0\v\6\d\b\c\c\d\f\3\0\7\y\d\p\u\8\0\r\t\x\x\5\x\k\l\g\q\4\s\b\b\j\6\h\4\y\g\u\8\g\f\o\e\p\q\5\v\0\n\0\r\c\t\6\g\7\5\3\p\a\3\e\1\g\x\l\q\s\n\4\8\6\3\l\b\c\d\j\o\v\5\z\0\5\3\2\m\4\8\q\l\1\e\y\u\4\7\c\f\f\h\c\9\e\f\w\1\1\l\7\m\d\v\4\p\l\z\4\o\9\b\a\7\0\f\x\x\d\m\p\7\z\n\w\8\c\c\x\9\0\u\r\x\t\d\6\7\5\u\s\p\k\t\4\x\h\9\d\n\h\f\l\6\k\s\q\f\e\8\j\l\x\p\k\t\h\c\2\k\i\k\s\a\5\6\l\4\f\z\z\g\n\u\4\p\1\5\j\m\0\f\r\w\n\y\1\6\n\2\i\k\j\l\8\t\v\e\3\8\b\z\u\2\v\l\0\e\n\t\l\d\r\1\t\a\6\9\x\o\n\0\v\3\s\c\f\h\k\w\q\h\e\0\z\4\y\y\5\0\f\v\9\z\a\s\e\a\s\2\8\k\o\l\4\v\5\q\7\r\v\5\3\8\b\r\0\5\s\0\p\z\0\e\9\7\5\j\d\j\a\h\3\n\6\t\7\r\a\b\q\u\a\2\s\g\8\p\j\l\i\m\6\0\e\5\e\7\4\9\v\v\f\u\0\l\4\v\w\1\s\w\n\i\8\m\r\q\e\w\2\s\p\d\3\t\q\2\w\0\1\2\y\7\1\g\b\z\o\g\z\k\3\8\i\j\p\t\e\m\u\5\e\5\i\0\r\k\6\t\m\r\9\e\n\6\l\d\b\l\d\c\c\x\y\3\j\g\z\5\m\l\b\u\o\4\8\1\7\8\j\t\8\r\w\i\d\3\z\r\s\i\x\x\l\c\g\d\b\c\r
\e\m\0\e\5\f\q\j\8\d\h\n\3\k\b\y\u\7\d\k\a\j\a\x\k\e\j\5\y\2\k\m\5\l\5\1\c\3\r\h\a\n\z\a\0\9\b\7\y\a\b\v\0\5\l\d\b\1\j\a\u\9\j\9\c\6\m\j\l\6\8\t\r\t\a\i\v\h\z\1\t\6\1\8\u\2\2\g\p\c\r\3\y\a\s\2\o\z\d\8\m\o\u\g\0\f\4\w\o\3\s\4\b\4\l\s\4\t\v\a\o\2\s\h\0\u\j\h\3\c\o\h\0\u\u\r\x\z\l\b\8\q\e\h\s\z\g\t\d\l\r\t\w\u\0\a\r\e\a\8\t\7\w\1\2\l\3\i\x\s\f\9\1\3\m\x\3\5\x\o\c\r\b\3\o\s\t\p\c\w\b\q\h\n\i\t\5\7\w\8\g\8\9\e\m\1\6\5\w\v\p\b\f\o\h\l\9\a\o\8\0\x\a\e\z\a\k\d\6\0\l\m\x\o\7\q\5\p\b\e\6\l\q\g\p\u\5\8\3\4\u\a\h\d\e\z\6\a\i\5\f\5\2\3\1\t\j\8\4\d\2\i\7\6\8\p\1\6\3\j\x\n\l\y\7\5\h\g\7\0\t\1\3\q\4\h\b\v\3\u\z\7\9\l\g\2\v\p\s\i\p\8\0\d\l\g\a\h\9\t\3\8\g\h\g\0\u\e\i\l\u\h\f\j\v\a\c\m\v\k\l\3\8\y\4\m\k\z\7\7\n\k\q\7\c\m\5\r\f\z\1\t\z\2\1\4\9\6\q\h\e\9\s\r\4\w\x\d\4\j\d\c\m\7\b\d\l\l\e\t\3\7\g\m\9\o\v\4\4\v\x\p\6\3\c\s\6\z\j\r\t\n\c\1\p\f\w\v\2\v\t\x\7\9\u\0\e\e\s\y\6\x\l\y\z\r\z\u\l\5\i\r\w\n\m\j\n\h\p\0\1\r\o\s\q\n\v\h\1\f\m\1\g\e\1\l\1\d\u\3\l\1\f\2\l\t\j\x\9\q\x\1\9\p\c\n\e\r\1\v\p\0\s\u\e\x\n\2\u\j\4\q\u\f\n\t\y\3\6\1\q\1\r\h\8\4\k\m\5\j\0\3\a\n\q\w\5\4\f\w\u\f\x\r\6\7\9\i\o\q\6\d\l\6\5\4\g\z\m\h\q\u\n\s\p\6\2\q\a\5\w\z\f\2\b\3\r\5\f\s\2\5\q\h\9\7\w\m\l\t\9\u\1\v\6\2\m\r\f\1\w\2\9\f\u\s\w\8\q\d\8\t\6\m\6\h\s\3\p\c\v\i\a\t\z\b\k\b\5\l\y\o\7\d\h\i\k\p\3\u\b\m\h\o\q\o\q\j\m\b\r\m\m\v\k\1\d\h\2\y\u\q\g\v\6\6\o\h\9\o\b\t\n\p\5\n\u\h\o\3\9\v\1\0\b\g\j\h\b\x\d\8\r\v\9\e\d\0\f\g\0\b\j\i\g\z\d\g\d\c\m\p\f\n\4\q\8\e\v\5\c\f\l\k\u\p\g\z\9\a\l\t\z\0\0\m\p\l\a\f\7\e\l\r\k\r\x\z\2\w\s\t\n\w\b\p\c\u\9\h\a\i\m\u\f\j\q\3\7\9\r\s\2\e\j\6\e\g\v\l\g\q\o\w\r\1\u\i\f\3\9\5\2\j\1\k\2\f\7\2\g\a\j\m\t\f\0\6\t\z\k\k\l\0\w\1\b\h\g\r\5\6\b\4\z\4\z\9\s\k\1\k\w\9\j\d\s\z\9\d\m\x\m\4\d\q\e\3\v\s\c\9\e\3\4\l\l\n\5\6\i\d\4\z\4\r\3\r\2\r\n\8\m\n\t\q\u\r\1\1\7\l\n\0\p\r\6\1\q\y\u\t\n\h\n\9\8\a\l\4\l\u\o\y\n\2\w\o\3\3\5\z\8\v\b\8\a\0\w\0\u\g\s\k\v\u\9\k\q\y\l\b\2\t\1\0\c\p\a\8\t\3\a\m\9\s\v\n\q\3\s\2\q\d\6\e\w\k\9\e\y\1\5\h\f\s\c\0\0\z\l\a\2\2\e\d\z\0\k\k\h\g\4\e\q\7\6\8\w\9\4\s\f\u\9\n\7\2\s\g\4\5\n\4\e\n\h\2\j\w\t\b\3\0\r\q\v\k\p\g\d\9\1\k\l\4\s\3\u\q\j\l\j\p\m\r\q\7\7\8\h\9\x\k\n\f\7\5\2\q\4\j\e\q\t\t\1\h\g\t\v\z\y\x\j\g\e\o\x\x\f\6\9\9\3\g\n\0\o\5\9\4\x\a\m\i\t\n\l\k\c\n\z\3\z\9\3\i\e\5\p\d\b\6\l\x\0\0\8\e\8\2\w\s\v\6\j\4\1\d\p\w\5\4\p\1\w\p\0\u\d\9\t\s\7\5\9\8\f\0\p\g\p\g\m\y\p\8\b\l\r\e\m\9\4\d\t\n\d\v\g\z\g\c\7\c\p\o\r\x\g\2\5\r\i\m\s\p\s\n\p\q\p\h\2\j\1\z\j\m\c\a\d\3\j\q\y\j\h\1\p\y\p\k\m\3\9\5\k\g\g\s\w\h\u\i\1\9\m\a\b\h\m\g\7\8\d\z\v\k\t\a\b\r\o\k\h\e\5\y\3\s\z\k\s\9\9\6\b\b\4\d\q\2\2\c\j\w\x\u\0\a\a\s\v\8\i\2\d\l\g\k\0\x\p\p\5\2\0\e\t\q\l\f\c\t\3\z\m\h\b\5\8\m\3\1\e\y\a\w\v\w\p\7\a\9\d\b\k\6\4\f\u\u\u\r\a\n\s\z\e\4\d\p\b\4\u\w\t\l\r\0\g\i\n\m\h\u\2\u\o\m\5\b\g\w\q\d\y\p\i\7\z\8\z\b\y\q\b\1\5\9\i\7\u\v\w\q\r\v\s\l\2\t\t\8\4\a\3\1\h\r\2\0\x\q\i\m\r\0\m\v\k\e\2\f\1\k\7\7\u\2\3\v\q\5\e\k\v\5\a\e\n\u\1\c\1\o\w\z\o\l\b\v\s\g\9\n\j\u\2\d\u\6\1\x\r\h\m\j\t\b\o\c\7\m\x\8\e\l\i\w\f\l\p\x\p\p\d\8\v\m\x\m\u\m\a\9\n\e\a\n\6\t\u\k\n\2\m\2\4\p\e\y\9\2\u\a\a\f\m\6\t\l\j\b\b\6\f\3\y\t\q\x\p\e\k\f\e\g\1\8\z\q\a\c\9\e\r\o\m\6\t\c\q\i\r\2\c\n\v\1\p\q\3\h\0\f\7\g\n\6\h\d\p\x\k\k\y\i\2\r\6\v\w\1\c\u\j\k\k\v\r\9\q\d\b\a\4\h\3\q\9\2\1\m\r\m\y\7\y\9\0\c\6\t\2\2\7\b\1\m\n\j\d\u\q\z\2\6\y\j\f\7\g\9\1\i\k\d\l\2\g\j\f\6\k\0\p\q\x\y\k\6\p\5\y\j\2\j\g\z\8\f\h\z\l\q\0\p\a\t\6\b\d\a\a\q\5\2\j\7\2\s\q\o\2\5\y\r\a\h\i\p\2\t\3\6\y\8\x\c\w\u\z\7\d\b\g\7\t\i\u\4\g\e\u\v\b\7\0\6\a\8\k\3\m\6\6\5\2\r\c\2\9\d\r\x\j\i\1\9\e\8\o\3\x\f\g\c\5\8\r\l\d\p\m\w\n\f\e\x\y\v\w\s\k\r\t\m\t\o\0\z\5\q\3\j\s\l\c\y\t\z\3\i\4\g\p\2\5\b\i\j\j\c\7\d\5\k\s\q\c\u\p\y\e\4\8\9\d\5\n\c\a\4\s\
2\t\l\5\v\3\w\3\n\z\6\d\6\m\n\t\y\v\l\w\a\o\x\2\9\h\e\f\7\w\v\b\s\w\y\d\n\h\4\t\g\9\4\a\3\2\7\9\l\7\c\m\a\h\9\p\q\q\1\1\l\6\8\7\6\5\o\j\k\u\y\h\t\z\6\r\r\2\0\r\h\a\a\s\7\u\7\h\g\3\e\z\o\3\e\g\k\5\o\m\7\s\h\c\h\2\1\l\9\l\r\l\m\7\4\q\x\r\l\s\1\a\n\c\6\t\q\i\d\w\h\8\g\t\e\6\3\8\f\p\9\2\u\j\9\o\g\w\h\y\c\5\4\c\w\w\v\1\t\5\5\1\x\8\m\t\b\m\z\t\q\d\e\1\n\t\g\5\u\q\e\c\9\l\x\7\r\n\i\u\q\2\j\7\s\0\k\i\a\7\l\3\c\p\5\0\e\0\d\6\5\1\f\k\x\6\7\7\e\f\t\c\4\w\q\7\9\j\g\0\l\1\l\e\a\r\9\q\m\5\c\0\b\h\q\y\m\d\a\e\f\p\8\2\9\l\r\h\9\3\7\t\p\i\0\5\d\q\v\h\c\0\u\l\o\2\j\n\e\2\m\2\x\q\7\k\p\v\t\3\a\m\m\e\h\y\4\h\i\8\n\4\a\8\g\f\l\5\c\5\z\3\a\j\6\r\r\1\x\c\y\j\z\h\5\e\e\i\k\0\r\f\a\l\5\a\w\l\s\g\g\s\v\d\v\x\7\e\d\a\w\3\q\j\h\p\p\a\u\r\n\m\h\r\c\b\f\m\0\n\l\o\w\y\t\p\s\4\g\4\a\g\n\2\y\6\w\r\q\a\h\p\o\7\c\b\v\w\1\0\d\f\z\n\7\l\1\4\3\j\1\7\9\5\j\x\l\7\s\e\n\u\w\b\4\0\v\t\f\n\8\q\j\y\k\m\v\b\v\1\c\8\b\3\f\2\e\f\f\e\g\4\3\7\2\c\l\j\z\t\0\l\a\r\d\o\5\k\9\8\5\w\7\t\g\x\g\o\o\r\a\1\u\3\8\w\i\g\2\t\2\y\e\w\o\1\8\3\y\s\1\4\x\b\q\a\a\b\e\j\2\i\1\3\d\t\8\a\6\m\v\w\p\e\3\m\f\i\h\1\e\1\3\h\3\k\j\7\e\o\n\u\j\s\8\o\g\k\1\m\q\b\h\t\f\r\v\u\i\l\e\d\g\6\t\0\f\h\5\p\3\5\8\m\0\1\4\4\b\p\n\5\1\2\g\5\7\f\s\f\8\z\f\n\j\e\w\4\3\4\2\t\y\6\1\e\f\v\d\7\8\1\k\k\9\n\2\p\a\o\j\x\w\m\p\o\p\w\8\7\i\w\e\v\7\t\f\p\7\e\0\t\h\d\y\k\m\g\u\s\m\i\8\9\u\j\h\7\b\4\y\t\h\z\8\8\6\e\a\i\a\t\g\r\y\j\5\e\y\a\5\e\c\3\q\v\n\z\t\6\j\k\g\h\g\p\x\q\r\h\r\f\h\l\e\m\i\b\z\d\c\y\f\a\r\c\5\0\w\a\o\q\n\j\h\7\2\p\r\g\n\i\g\i\p\m\t\4\e\8\r\w\x\n\k\k\d\w\0\g\6\n\x\q\w\0\n\f\b\g\z\w\e\k\b\l\8\l\b\v\v\g\m\v\5\g\n\v\m\h\7\d\i\k\q\9\3\t\w\h\o\i\b\4\u\g\5\y\y\5\t\c\h\r\q\y\4\6\g\w\3\0\g\7\b\8\1\t\1\u\7\n\c\p\d\u\j\w\3\y\n\4\k\w\s\u\w\j\y\x\q\z\f\4\y\k\a\p\l\o\e\v\r\3\9\5\y\6\c\l\7\p\r\j\q\4\y\h\x\r\x\y\l\b\z\1\w\t\p\j\i\5\k\g\9\i\2\5\o\l\6\8\t\c\m\k\1\f\n\t\q\k\4\7\0\w\r\x\d\n\y\1\3\9\k\k\y\1\s\t\f\z\b\u\9\e\o\p\5\9\v\z\e\b\v\8\s\v\d\n\p\l\y\0\q\u\f\u\z\8\5\2\e\e\m\r\2\9\q\l\m\w\t\o\7\a\s\v\d\j\n\l\9\7\s\r\f\e\j\p\v\5\4\b\l\n\v\u\l\d\a\c\8\v\p\m\j\g\w\y\c\k\2\9\4\m\7\m\m\q\0\q\t\c\r\d\w\b\p\n\h\r\6\t\2\8\i\s\l\5\d\q\7\0\1\1\7\n\v\m\3\d\b\g\p\f\l\n\5\l\w\2\n\7\u\k\n\m\l\h\2\y\g\3\9\t\0\k\k\9\f\w\3\i\n\9\f\n\t\8\z\9\1\o\6\m\k\9\d\h\2\x\7\v\r\t\f\f\r\n\6\9\z\3\m\v\y\h\d\3\0\l\w\n\a\7\p\q\k\o\w\u\q\w\5\k\2\s\i\7\u\5\w\u\a\l\m\v\e\g\u\m\r\i\t\j\u\s\4\n\g\t\n\u\3\2\s\o\g\b\n\d\r\d\s\e\6\8\4\d\r\g\d\x\w\q\e\5\v\w\u\9\6\v\6\a\x\t\5\r\r\w\p\t\s\t\9\t\6\0\m\3\6\t\3\7\h\1\i\s\c\6\3\i\7\s\c\a\z\7\j\b\s\t\k\x\y\k\e\g\v\2\u\w\x\o\c\t\u\e\m\f\q\p\b\v\d\e\m\3\0\s\i\h\o\a\j\f\l\6\t\i\4\t\4\o\k\m\c\e\q\r\h\2\q\t\g\f\y\4\3\a\t\n\1\p\9\5\9\p\m\t\v\u\0\n\n\x\x\g\l\f\0\y\k\q\c\a\k\e\g\k\9\4\0\m\6\x\l\q\4\1\h\5\l\9\r\1\4\1\t\7\f\q\3\9\0\w\f\w\n ]] 00:10:04.497 00:10:04.497 real 0m4.179s 00:10:04.497 user 0m3.463s 00:10:04.497 sys 0m1.906s 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:10:04.497 18:17:16 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:10:04.497 18:17:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:04.497 { 00:10:04.497 "subsystems": [ 00:10:04.497 { 00:10:04.497 "subsystem": "bdev", 00:10:04.497 "config": [ 00:10:04.497 { 00:10:04.497 "params": { 00:10:04.497 "trtype": "pcie", 00:10:04.497 "traddr": "0000:00:10.0", 00:10:04.497 "name": "Nvme0" 00:10:04.497 }, 00:10:04.497 "method": "bdev_nvme_attach_controller" 00:10:04.497 }, 00:10:04.497 { 00:10:04.497 "method": "bdev_wait_for_examine" 00:10:04.497 } 00:10:04.497 ] 00:10:04.497 } 00:10:04.497 ] 00:10:04.497 } 00:10:04.756 [2024-07-22 18:17:16.540349] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:04.756 [2024-07-22 18:17:16.540495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65194 ] 00:10:04.756 [2024-07-22 18:17:16.704546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.014 [2024-07-22 18:17:16.950326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.272 [2024-07-22 18:17:17.154683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.905  Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:06.905 00:10:06.905 18:17:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:06.905 00:10:06.905 real 0m49.482s 00:10:06.905 user 0m41.326s 00:10:06.905 sys 0m20.688s 00:10:06.905 ************************************ 00:10:06.905 END TEST spdk_dd_basic_rw 00:10:06.905 ************************************ 00:10:06.905 18:17:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.905 18:17:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:10:06.905 18:17:18 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:06.905 18:17:18 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:06.905 18:17:18 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:06.905 18:17:18 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.905 18:17:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:06.905 ************************************ 00:10:06.905 START TEST spdk_dd_posix 00:10:06.905 ************************************ 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:10:06.905 * Looking for test storage... 
00:10:06.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:10:06.905 * First test run, liburing in use 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:06.905 ************************************ 00:10:06.905 START TEST dd_flag_append 00:10:06.905 ************************************ 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=jxsh6rtld7hcd9fmsuzkv2cp88s4daus 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=1kvhz4ib5mebsrakccvdkag1m2dkcdyo 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s jxsh6rtld7hcd9fmsuzkv2cp88s4daus 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 1kvhz4ib5mebsrakccvdkag1m2dkcdyo 00:10:06.905 18:17:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:06.905 [2024-07-22 18:17:18.792839] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:06.905 [2024-07-22 18:17:18.793027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65276 ] 00:10:07.163 [2024-07-22 18:17:18.976681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.421 [2024-07-22 18:17:19.268662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.679 [2024-07-22 18:17:19.480665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:09.063  Copying: 32/32 [B] (average 31 kBps) 00:10:09.063 00:10:09.063 ************************************ 00:10:09.063 END TEST dd_flag_append 00:10:09.063 ************************************ 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 1kvhz4ib5mebsrakccvdkag1m2dkcdyojxsh6rtld7hcd9fmsuzkv2cp88s4daus == \1\k\v\h\z\4\i\b\5\m\e\b\s\r\a\k\c\c\v\d\k\a\g\1\m\2\d\k\c\d\y\o\j\x\s\h\6\r\t\l\d\7\h\c\d\9\f\m\s\u\z\k\v\2\c\p\8\8\s\4\d\a\u\s ]] 00:10:09.063 00:10:09.063 real 0m2.097s 00:10:09.063 user 0m1.691s 00:10:09.063 sys 0m1.029s 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:09.063 ************************************ 00:10:09.063 START TEST dd_flag_directory 00:10:09.063 ************************************ 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:09.063 18:17:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:09.063 [2024-07-22 18:17:20.957324] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:09.063 [2024-07-22 18:17:20.957494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65322 ] 00:10:09.389 [2024-07-22 18:17:21.121392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.647 [2024-07-22 18:17:21.389254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.647 [2024-07-22 18:17:21.589647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:09.904 [2024-07-22 18:17:21.698113] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:09.904 [2024-07-22 18:17:21.698183] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:09.904 [2024-07-22 18:17:21.698231] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:10.470 [2024-07-22 18:17:22.433168] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:11.036 18:17:22 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:11.036 [2024-07-22 18:17:23.002036] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:11.036 [2024-07-22 18:17:23.002325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65349 ] 00:10:11.294 [2024-07-22 18:17:23.177850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.551 [2024-07-22 18:17:23.421841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.809 [2024-07-22 18:17:23.628155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:11.809 [2024-07-22 18:17:23.739075] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:11.809 [2024-07-22 18:17:23.739152] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:11.809 [2024-07-22 18:17:23.739194] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:12.743 [2024-07-22 18:17:24.496014] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:13.001 00:10:13.001 real 0m4.113s 00:10:13.001 user 0m3.358s 00:10:13.001 sys 0m0.527s 00:10:13.001 ************************************ 00:10:13.001 END TEST dd_flag_directory 00:10:13.001 ************************************ 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:13.001 ************************************ 00:10:13.001 START TEST dd_flag_nofollow 00:10:13.001 ************************************ 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:13.001 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:13.002 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:13.002 18:17:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:13.259 
[2024-07-22 18:17:25.109053] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:13.260 [2024-07-22 18:17:25.109269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65395 ] 00:10:13.519 [2024-07-22 18:17:25.283161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.519 [2024-07-22 18:17:25.513380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.777 [2024-07-22 18:17:25.713347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:14.035 [2024-07-22 18:17:25.823319] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:14.035 [2024-07-22 18:17:25.823374] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:14.035 [2024-07-22 18:17:25.823402] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:14.601 [2024-07-22 18:17:26.575635] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:15.166 18:17:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:15.166 [2024-07-22 18:17:27.130205] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:15.166 [2024-07-22 18:17:27.130385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65422 ] 00:10:15.424 [2024-07-22 18:17:27.298126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.682 [2024-07-22 18:17:27.582183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.940 [2024-07-22 18:17:27.788581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:15.940 [2024-07-22 18:17:27.899692] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:15.940 [2024-07-22 18:17:27.899753] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:15.941 [2024-07-22 18:17:27.899810] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:16.876 [2024-07-22 18:17:28.646283] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:17.135 18:17:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:10:17.135 18:17:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:17.135 18:17:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:10:17.135 18:17:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:10:17.135 18:17:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:10:17.135 18:17:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:17.135 18:17:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:10:17.135 18:17:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:10:17.135 18:17:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:17.135 18:17:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:17.393 [2024-07-22 18:17:29.191106] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:17.393 [2024-07-22 18:17:29.191526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65447 ] 00:10:17.393 [2024-07-22 18:17:29.357524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.650 [2024-07-22 18:17:29.610857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.908 [2024-07-22 18:17:29.816783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:19.102  Copying: 512/512 [B] (average 500 kBps) 00:10:19.102 00:10:19.102 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ mhxoxgzw1qng62wjfa21o4uljko1fuqm1y8wgs3w069zh3kvjw528mc6b5osbmhrdr38hr4ljsfxwgr58qmm8nrgt8svgf12x0xogi02sw2dx1dgiojm6hptk21sco6dn43tgsiatxoiyvu34eqw3ifyfsp04y6rvd6wl9wowaexwaijnuygdm01dqsu00mxr56gb3v7aupno964gri5q77q4gnwr3ovz35t0xpyuygne879yzhsy3n306y037vkayvat7lk35jdpl6qb4mes53ydmwfwcyhh9u0xurr1d1zhkp2q85tebikqo95445y6zten2lwioj3lza7ubzai8j4lag6cdb6qdjd6qq63vlboai35polu1ii4o5m6zqyxs12tr5eh5n19xid3pa0zanmhxtqh2p9v4sdxok7q8e2jr1c2ig0m59nk729e2vk2r9sdr59t2mocq39xkaacjbg1jgini2au9c8qwwc7dxzwqpwdd4oo6mluflt93l4 == \m\h\x\o\x\g\z\w\1\q\n\g\6\2\w\j\f\a\2\1\o\4\u\l\j\k\o\1\f\u\q\m\1\y\8\w\g\s\3\w\0\6\9\z\h\3\k\v\j\w\5\2\8\m\c\6\b\5\o\s\b\m\h\r\d\r\3\8\h\r\4\l\j\s\f\x\w\g\r\5\8\q\m\m\8\n\r\g\t\8\s\v\g\f\1\2\x\0\x\o\g\i\0\2\s\w\2\d\x\1\d\g\i\o\j\m\6\h\p\t\k\2\1\s\c\o\6\d\n\4\3\t\g\s\i\a\t\x\o\i\y\v\u\3\4\e\q\w\3\i\f\y\f\s\p\0\4\y\6\r\v\d\6\w\l\9\w\o\w\a\e\x\w\a\i\j\n\u\y\g\d\m\0\1\d\q\s\u\0\0\m\x\r\5\6\g\b\3\v\7\a\u\p\n\o\9\6\4\g\r\i\5\q\7\7\q\4\g\n\w\r\3\o\v\z\3\5\t\0\x\p\y\u\y\g\n\e\8\7\9\y\z\h\s\y\3\n\3\0\6\y\0\3\7\v\k\a\y\v\a\t\7\l\k\3\5\j\d\p\l\6\q\b\4\m\e\s\5\3\y\d\m\w\f\w\c\y\h\h\9\u\0\x\u\r\r\1\d\1\z\h\k\p\2\q\8\5\t\e\b\i\k\q\o\9\5\4\4\5\y\6\z\t\e\n\2\l\w\i\o\j\3\l\z\a\7\u\b\z\a\i\8\j\4\l\a\g\6\c\d\b\6\q\d\j\d\6\q\q\6\3\v\l\b\o\a\i\3\5\p\o\l\u\1\i\i\4\o\5\m\6\z\q\y\x\s\1\2\t\r\5\e\h\5\n\1\9\x\i\d\3\p\a\0\z\a\n\m\h\x\t\q\h\2\p\9\v\4\s\d\x\o\k\7\q\8\e\2\j\r\1\c\2\i\g\0\m\5\9\n\k\7\2\9\e\2\v\k\2\r\9\s\d\r\5\9\t\2\m\o\c\q\3\9\x\k\a\a\c\j\b\g\1\j\g\i\n\i\2\a\u\9\c\8\q\w\w\c\7\d\x\z\w\q\p\w\d\d\4\o\o\6\m\l\u\f\l\t\9\3\l\4 ]] 00:10:19.102 00:10:19.102 real 0m6.129s 00:10:19.102 user 0m5.006s 00:10:19.102 sys 0m1.520s 00:10:19.102 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.102 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:10:19.102 ************************************ 00:10:19.102 END TEST dd_flag_nofollow 00:10:19.102 ************************************ 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:19.361 ************************************ 00:10:19.361 START TEST dd_flag_noatime 00:10:19.361 ************************************ 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:10:19.361 18:17:31 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721672249 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721672251 00:10:19.361 18:17:31 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:10:20.346 18:17:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:20.346 [2024-07-22 18:17:32.305076] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:20.346 [2024-07-22 18:17:32.305271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65507 ] 00:10:20.604 [2024-07-22 18:17:32.481373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.863 [2024-07-22 18:17:32.757404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.121 [2024-07-22 18:17:32.967275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:22.505  Copying: 512/512 [B] (average 500 kBps) 00:10:22.506 00:10:22.506 18:17:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:22.506 18:17:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721672249 )) 00:10:22.506 18:17:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:22.506 18:17:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721672251 )) 00:10:22.506 18:17:34 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:22.506 [2024-07-22 18:17:34.374664] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:22.506 [2024-07-22 18:17:34.374818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65538 ] 00:10:22.765 [2024-07-22 18:17:34.535423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.023 [2024-07-22 18:17:34.798055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.023 [2024-07-22 18:17:35.030318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:24.655  Copying: 512/512 [B] (average 500 kBps) 00:10:24.655 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721672255 )) 00:10:24.655 00:10:24.655 real 0m5.201s 00:10:24.655 user 0m3.407s 00:10:24.655 sys 0m2.072s 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.655 ************************************ 00:10:24.655 END TEST dd_flag_noatime 00:10:24.655 ************************************ 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:24.655 ************************************ 00:10:24.655 START TEST dd_flags_misc 00:10:24.655 ************************************ 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:24.655 18:17:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:24.655 [2024-07-22 18:17:36.520842] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:24.655 [2024-07-22 18:17:36.521020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65584 ] 00:10:24.914 [2024-07-22 18:17:36.688776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.172 [2024-07-22 18:17:36.932405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.172 [2024-07-22 18:17:37.139774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:26.806  Copying: 512/512 [B] (average 500 kBps) 00:10:26.806 00:10:26.806 18:17:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lo29foxxaovojxuvltj08tohyq7waqbpgxst2ccl7lgyohik4o29kf8bkek3mfh3y766rr76kqcpgt925f8lbinlkh4agrxjc3kmawy9hhz7j66y9ps8ybpki7634oeba7ml9lvqkt29otaylg910leb2y3o9i5d0tmso4eivfa6rejqd2lrpebo9muu03ulotjpovrfgg1041om1y75p8p59eoawexrf4u9g6xpb3nzlq2deo7b0e315jfafaz3z2wkvf2soik35n5atwqezcrh2d6aknfdp5yn4a20iu5uluu7yq0j3co8xyk4qhwz7ahfkgr6xgfyff41gn3s9yjax201jxxgxl33u9cx50q0md68gjrpamek8ei9e062l44kkwb31j9jssftz9xqmtq807qwmpirpuiwls22i4qun2oi4bfkut7q4vzniht3idrbuk7tf0rdr6jl0rp84udqqf22e4wici8r4g21waclnrls87xdjk7yvw5qy686 == \l\o\2\9\f\o\x\x\a\o\v\o\j\x\u\v\l\t\j\0\8\t\o\h\y\q\7\w\a\q\b\p\g\x\s\t\2\c\c\l\7\l\g\y\o\h\i\k\4\o\2\9\k\f\8\b\k\e\k\3\m\f\h\3\y\7\6\6\r\r\7\6\k\q\c\p\g\t\9\2\5\f\8\l\b\i\n\l\k\h\4\a\g\r\x\j\c\3\k\m\a\w\y\9\h\h\z\7\j\6\6\y\9\p\s\8\y\b\p\k\i\7\6\3\4\o\e\b\a\7\m\l\9\l\v\q\k\t\2\9\o\t\a\y\l\g\9\1\0\l\e\b\2\y\3\o\9\i\5\d\0\t\m\s\o\4\e\i\v\f\a\6\r\e\j\q\d\2\l\r\p\e\b\o\9\m\u\u\0\3\u\l\o\t\j\p\o\v\r\f\g\g\1\0\4\1\o\m\1\y\7\5\p\8\p\5\9\e\o\a\w\e\x\r\f\4\u\9\g\6\x\p\b\3\n\z\l\q\2\d\e\o\7\b\0\e\3\1\5\j\f\a\f\a\z\3\z\2\w\k\v\f\2\s\o\i\k\3\5\n\5\a\t\w\q\e\z\c\r\h\2\d\6\a\k\n\f\d\p\5\y\n\4\a\2\0\i\u\5\u\l\u\u\7\y\q\0\j\3\c\o\8\x\y\k\4\q\h\w\z\7\a\h\f\k\g\r\6\x\g\f\y\f\f\4\1\g\n\3\s\9\y\j\a\x\2\0\1\j\x\x\g\x\l\3\3\u\9\c\x\5\0\q\0\m\d\6\8\g\j\r\p\a\m\e\k\8\e\i\9\e\0\6\2\l\4\4\k\k\w\b\3\1\j\9\j\s\s\f\t\z\9\x\q\m\t\q\8\0\7\q\w\m\p\i\r\p\u\i\w\l\s\2\2\i\4\q\u\n\2\o\i\4\b\f\k\u\t\7\q\4\v\z\n\i\h\t\3\i\d\r\b\u\k\7\t\f\0\r\d\r\6\j\l\0\r\p\8\4\u\d\q\q\f\2\2\e\4\w\i\c\i\8\r\4\g\2\1\w\a\c\l\n\r\l\s\8\7\x\d\j\k\7\y\v\w\5\q\y\6\8\6 ]] 00:10:26.806 18:17:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:26.806 18:17:38 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:26.806 [2024-07-22 18:17:38.546874] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:26.806 [2024-07-22 18:17:38.547061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65610 ] 00:10:26.806 [2024-07-22 18:17:38.722459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.064 [2024-07-22 18:17:38.970699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.360 [2024-07-22 18:17:39.178282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:28.734  Copying: 512/512 [B] (average 500 kBps) 00:10:28.734 00:10:28.734 18:17:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lo29foxxaovojxuvltj08tohyq7waqbpgxst2ccl7lgyohik4o29kf8bkek3mfh3y766rr76kqcpgt925f8lbinlkh4agrxjc3kmawy9hhz7j66y9ps8ybpki7634oeba7ml9lvqkt29otaylg910leb2y3o9i5d0tmso4eivfa6rejqd2lrpebo9muu03ulotjpovrfgg1041om1y75p8p59eoawexrf4u9g6xpb3nzlq2deo7b0e315jfafaz3z2wkvf2soik35n5atwqezcrh2d6aknfdp5yn4a20iu5uluu7yq0j3co8xyk4qhwz7ahfkgr6xgfyff41gn3s9yjax201jxxgxl33u9cx50q0md68gjrpamek8ei9e062l44kkwb31j9jssftz9xqmtq807qwmpirpuiwls22i4qun2oi4bfkut7q4vzniht3idrbuk7tf0rdr6jl0rp84udqqf22e4wici8r4g21waclnrls87xdjk7yvw5qy686 == \l\o\2\9\f\o\x\x\a\o\v\o\j\x\u\v\l\t\j\0\8\t\o\h\y\q\7\w\a\q\b\p\g\x\s\t\2\c\c\l\7\l\g\y\o\h\i\k\4\o\2\9\k\f\8\b\k\e\k\3\m\f\h\3\y\7\6\6\r\r\7\6\k\q\c\p\g\t\9\2\5\f\8\l\b\i\n\l\k\h\4\a\g\r\x\j\c\3\k\m\a\w\y\9\h\h\z\7\j\6\6\y\9\p\s\8\y\b\p\k\i\7\6\3\4\o\e\b\a\7\m\l\9\l\v\q\k\t\2\9\o\t\a\y\l\g\9\1\0\l\e\b\2\y\3\o\9\i\5\d\0\t\m\s\o\4\e\i\v\f\a\6\r\e\j\q\d\2\l\r\p\e\b\o\9\m\u\u\0\3\u\l\o\t\j\p\o\v\r\f\g\g\1\0\4\1\o\m\1\y\7\5\p\8\p\5\9\e\o\a\w\e\x\r\f\4\u\9\g\6\x\p\b\3\n\z\l\q\2\d\e\o\7\b\0\e\3\1\5\j\f\a\f\a\z\3\z\2\w\k\v\f\2\s\o\i\k\3\5\n\5\a\t\w\q\e\z\c\r\h\2\d\6\a\k\n\f\d\p\5\y\n\4\a\2\0\i\u\5\u\l\u\u\7\y\q\0\j\3\c\o\8\x\y\k\4\q\h\w\z\7\a\h\f\k\g\r\6\x\g\f\y\f\f\4\1\g\n\3\s\9\y\j\a\x\2\0\1\j\x\x\g\x\l\3\3\u\9\c\x\5\0\q\0\m\d\6\8\g\j\r\p\a\m\e\k\8\e\i\9\e\0\6\2\l\4\4\k\k\w\b\3\1\j\9\j\s\s\f\t\z\9\x\q\m\t\q\8\0\7\q\w\m\p\i\r\p\u\i\w\l\s\2\2\i\4\q\u\n\2\o\i\4\b\f\k\u\t\7\q\4\v\z\n\i\h\t\3\i\d\r\b\u\k\7\t\f\0\r\d\r\6\j\l\0\r\p\8\4\u\d\q\q\f\2\2\e\4\w\i\c\i\8\r\4\g\2\1\w\a\c\l\n\r\l\s\8\7\x\d\j\k\7\y\v\w\5\q\y\6\8\6 ]] 00:10:28.734 18:17:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:28.734 18:17:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:28.734 [2024-07-22 18:17:40.568595] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:28.734 [2024-07-22 18:17:40.569275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65633 ] 00:10:28.734 [2024-07-22 18:17:40.734428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.314 [2024-07-22 18:17:41.020263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.314 [2024-07-22 18:17:41.222561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:30.524  Copying: 512/512 [B] (average 166 kBps) 00:10:30.524 00:10:30.524 18:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lo29foxxaovojxuvltj08tohyq7waqbpgxst2ccl7lgyohik4o29kf8bkek3mfh3y766rr76kqcpgt925f8lbinlkh4agrxjc3kmawy9hhz7j66y9ps8ybpki7634oeba7ml9lvqkt29otaylg910leb2y3o9i5d0tmso4eivfa6rejqd2lrpebo9muu03ulotjpovrfgg1041om1y75p8p59eoawexrf4u9g6xpb3nzlq2deo7b0e315jfafaz3z2wkvf2soik35n5atwqezcrh2d6aknfdp5yn4a20iu5uluu7yq0j3co8xyk4qhwz7ahfkgr6xgfyff41gn3s9yjax201jxxgxl33u9cx50q0md68gjrpamek8ei9e062l44kkwb31j9jssftz9xqmtq807qwmpirpuiwls22i4qun2oi4bfkut7q4vzniht3idrbuk7tf0rdr6jl0rp84udqqf22e4wici8r4g21waclnrls87xdjk7yvw5qy686 == \l\o\2\9\f\o\x\x\a\o\v\o\j\x\u\v\l\t\j\0\8\t\o\h\y\q\7\w\a\q\b\p\g\x\s\t\2\c\c\l\7\l\g\y\o\h\i\k\4\o\2\9\k\f\8\b\k\e\k\3\m\f\h\3\y\7\6\6\r\r\7\6\k\q\c\p\g\t\9\2\5\f\8\l\b\i\n\l\k\h\4\a\g\r\x\j\c\3\k\m\a\w\y\9\h\h\z\7\j\6\6\y\9\p\s\8\y\b\p\k\i\7\6\3\4\o\e\b\a\7\m\l\9\l\v\q\k\t\2\9\o\t\a\y\l\g\9\1\0\l\e\b\2\y\3\o\9\i\5\d\0\t\m\s\o\4\e\i\v\f\a\6\r\e\j\q\d\2\l\r\p\e\b\o\9\m\u\u\0\3\u\l\o\t\j\p\o\v\r\f\g\g\1\0\4\1\o\m\1\y\7\5\p\8\p\5\9\e\o\a\w\e\x\r\f\4\u\9\g\6\x\p\b\3\n\z\l\q\2\d\e\o\7\b\0\e\3\1\5\j\f\a\f\a\z\3\z\2\w\k\v\f\2\s\o\i\k\3\5\n\5\a\t\w\q\e\z\c\r\h\2\d\6\a\k\n\f\d\p\5\y\n\4\a\2\0\i\u\5\u\l\u\u\7\y\q\0\j\3\c\o\8\x\y\k\4\q\h\w\z\7\a\h\f\k\g\r\6\x\g\f\y\f\f\4\1\g\n\3\s\9\y\j\a\x\2\0\1\j\x\x\g\x\l\3\3\u\9\c\x\5\0\q\0\m\d\6\8\g\j\r\p\a\m\e\k\8\e\i\9\e\0\6\2\l\4\4\k\k\w\b\3\1\j\9\j\s\s\f\t\z\9\x\q\m\t\q\8\0\7\q\w\m\p\i\r\p\u\i\w\l\s\2\2\i\4\q\u\n\2\o\i\4\b\f\k\u\t\7\q\4\v\z\n\i\h\t\3\i\d\r\b\u\k\7\t\f\0\r\d\r\6\j\l\0\r\p\8\4\u\d\q\q\f\2\2\e\4\w\i\c\i\8\r\4\g\2\1\w\a\c\l\n\r\l\s\8\7\x\d\j\k\7\y\v\w\5\q\y\6\8\6 ]] 00:10:30.524 18:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:30.524 18:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:30.782 [2024-07-22 18:17:42.609695] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:30.782 [2024-07-22 18:17:42.609857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65660 ] 00:10:30.782 [2024-07-22 18:17:42.773258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.039 [2024-07-22 18:17:43.033300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.296 [2024-07-22 18:17:43.234162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:32.926  Copying: 512/512 [B] (average 250 kBps) 00:10:32.926 00:10:32.926 18:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lo29foxxaovojxuvltj08tohyq7waqbpgxst2ccl7lgyohik4o29kf8bkek3mfh3y766rr76kqcpgt925f8lbinlkh4agrxjc3kmawy9hhz7j66y9ps8ybpki7634oeba7ml9lvqkt29otaylg910leb2y3o9i5d0tmso4eivfa6rejqd2lrpebo9muu03ulotjpovrfgg1041om1y75p8p59eoawexrf4u9g6xpb3nzlq2deo7b0e315jfafaz3z2wkvf2soik35n5atwqezcrh2d6aknfdp5yn4a20iu5uluu7yq0j3co8xyk4qhwz7ahfkgr6xgfyff41gn3s9yjax201jxxgxl33u9cx50q0md68gjrpamek8ei9e062l44kkwb31j9jssftz9xqmtq807qwmpirpuiwls22i4qun2oi4bfkut7q4vzniht3idrbuk7tf0rdr6jl0rp84udqqf22e4wici8r4g21waclnrls87xdjk7yvw5qy686 == \l\o\2\9\f\o\x\x\a\o\v\o\j\x\u\v\l\t\j\0\8\t\o\h\y\q\7\w\a\q\b\p\g\x\s\t\2\c\c\l\7\l\g\y\o\h\i\k\4\o\2\9\k\f\8\b\k\e\k\3\m\f\h\3\y\7\6\6\r\r\7\6\k\q\c\p\g\t\9\2\5\f\8\l\b\i\n\l\k\h\4\a\g\r\x\j\c\3\k\m\a\w\y\9\h\h\z\7\j\6\6\y\9\p\s\8\y\b\p\k\i\7\6\3\4\o\e\b\a\7\m\l\9\l\v\q\k\t\2\9\o\t\a\y\l\g\9\1\0\l\e\b\2\y\3\o\9\i\5\d\0\t\m\s\o\4\e\i\v\f\a\6\r\e\j\q\d\2\l\r\p\e\b\o\9\m\u\u\0\3\u\l\o\t\j\p\o\v\r\f\g\g\1\0\4\1\o\m\1\y\7\5\p\8\p\5\9\e\o\a\w\e\x\r\f\4\u\9\g\6\x\p\b\3\n\z\l\q\2\d\e\o\7\b\0\e\3\1\5\j\f\a\f\a\z\3\z\2\w\k\v\f\2\s\o\i\k\3\5\n\5\a\t\w\q\e\z\c\r\h\2\d\6\a\k\n\f\d\p\5\y\n\4\a\2\0\i\u\5\u\l\u\u\7\y\q\0\j\3\c\o\8\x\y\k\4\q\h\w\z\7\a\h\f\k\g\r\6\x\g\f\y\f\f\4\1\g\n\3\s\9\y\j\a\x\2\0\1\j\x\x\g\x\l\3\3\u\9\c\x\5\0\q\0\m\d\6\8\g\j\r\p\a\m\e\k\8\e\i\9\e\0\6\2\l\4\4\k\k\w\b\3\1\j\9\j\s\s\f\t\z\9\x\q\m\t\q\8\0\7\q\w\m\p\i\r\p\u\i\w\l\s\2\2\i\4\q\u\n\2\o\i\4\b\f\k\u\t\7\q\4\v\z\n\i\h\t\3\i\d\r\b\u\k\7\t\f\0\r\d\r\6\j\l\0\r\p\8\4\u\d\q\q\f\2\2\e\4\w\i\c\i\8\r\4\g\2\1\w\a\c\l\n\r\l\s\8\7\x\d\j\k\7\y\v\w\5\q\y\6\8\6 ]] 00:10:32.926 18:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:32.926 18:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:10:32.926 18:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:10:32.926 18:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:32.926 18:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:32.926 18:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:32.926 [2024-07-22 18:17:44.633409] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:32.926 [2024-07-22 18:17:44.633575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65687 ] 00:10:32.926 [2024-07-22 18:17:44.796363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.184 [2024-07-22 18:17:45.036699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.497 [2024-07-22 18:17:45.240945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:34.917  Copying: 512/512 [B] (average 500 kBps) 00:10:34.917 00:10:34.917 18:17:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mui133nzylamb23r7l724wpsepmcdz6a4s4klixjc4t6f7dera2toh2dvdbb9qd9s5ucj00wfngpwqd2smsdpcs1z73djf3roqfvpuiamoarcsh7mra6eznjpu3tylmooeyxe3wsiyqiaq8iwukjgosejox1pbwn76w1w9fhx53twz3qlv54azg0b4mlvimoxx459nmxwmq9we2ew4a08o8ku57yvalfevsp6znnqrizeue1npie7l5u15wxou4ebwsfm1bfg4xzlxeqlsnavmusnrnteu5ag3ax5fnjh52tbe3a4a9bbq9opyz4x00nr8ot6lckrgse2kec9ayb1cui2zs18u3c1awi2xk754hqr0khl76d8c2et39uzx5scnzwlafi31jfpflkd5yvg9r7o8jcpxrx7kdlbusr21k63e5mqwl9xl9g9qn6uo13v7a0q96zs5t6vqgtozesu1gj2flg83v8ft1p9xz1mwfpq15c2bz5is4upkbz98h7 == \m\u\i\1\3\3\n\z\y\l\a\m\b\2\3\r\7\l\7\2\4\w\p\s\e\p\m\c\d\z\6\a\4\s\4\k\l\i\x\j\c\4\t\6\f\7\d\e\r\a\2\t\o\h\2\d\v\d\b\b\9\q\d\9\s\5\u\c\j\0\0\w\f\n\g\p\w\q\d\2\s\m\s\d\p\c\s\1\z\7\3\d\j\f\3\r\o\q\f\v\p\u\i\a\m\o\a\r\c\s\h\7\m\r\a\6\e\z\n\j\p\u\3\t\y\l\m\o\o\e\y\x\e\3\w\s\i\y\q\i\a\q\8\i\w\u\k\j\g\o\s\e\j\o\x\1\p\b\w\n\7\6\w\1\w\9\f\h\x\5\3\t\w\z\3\q\l\v\5\4\a\z\g\0\b\4\m\l\v\i\m\o\x\x\4\5\9\n\m\x\w\m\q\9\w\e\2\e\w\4\a\0\8\o\8\k\u\5\7\y\v\a\l\f\e\v\s\p\6\z\n\n\q\r\i\z\e\u\e\1\n\p\i\e\7\l\5\u\1\5\w\x\o\u\4\e\b\w\s\f\m\1\b\f\g\4\x\z\l\x\e\q\l\s\n\a\v\m\u\s\n\r\n\t\e\u\5\a\g\3\a\x\5\f\n\j\h\5\2\t\b\e\3\a\4\a\9\b\b\q\9\o\p\y\z\4\x\0\0\n\r\8\o\t\6\l\c\k\r\g\s\e\2\k\e\c\9\a\y\b\1\c\u\i\2\z\s\1\8\u\3\c\1\a\w\i\2\x\k\7\5\4\h\q\r\0\k\h\l\7\6\d\8\c\2\e\t\3\9\u\z\x\5\s\c\n\z\w\l\a\f\i\3\1\j\f\p\f\l\k\d\5\y\v\g\9\r\7\o\8\j\c\p\x\r\x\7\k\d\l\b\u\s\r\2\1\k\6\3\e\5\m\q\w\l\9\x\l\9\g\9\q\n\6\u\o\1\3\v\7\a\0\q\9\6\z\s\5\t\6\v\q\g\t\o\z\e\s\u\1\g\j\2\f\l\g\8\3\v\8\f\t\1\p\9\x\z\1\m\w\f\p\q\1\5\c\2\b\z\5\i\s\4\u\p\k\b\z\9\8\h\7 ]] 00:10:34.917 18:17:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:34.917 18:17:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:10:34.917 [2024-07-22 18:17:46.612584] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:34.917 [2024-07-22 18:17:46.612751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65714 ] 00:10:34.917 [2024-07-22 18:17:46.777643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.174 [2024-07-22 18:17:47.064379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.431 [2024-07-22 18:17:47.282510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:36.820  Copying: 512/512 [B] (average 500 kBps) 00:10:36.820 00:10:36.820 18:17:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mui133nzylamb23r7l724wpsepmcdz6a4s4klixjc4t6f7dera2toh2dvdbb9qd9s5ucj00wfngpwqd2smsdpcs1z73djf3roqfvpuiamoarcsh7mra6eznjpu3tylmooeyxe3wsiyqiaq8iwukjgosejox1pbwn76w1w9fhx53twz3qlv54azg0b4mlvimoxx459nmxwmq9we2ew4a08o8ku57yvalfevsp6znnqrizeue1npie7l5u15wxou4ebwsfm1bfg4xzlxeqlsnavmusnrnteu5ag3ax5fnjh52tbe3a4a9bbq9opyz4x00nr8ot6lckrgse2kec9ayb1cui2zs18u3c1awi2xk754hqr0khl76d8c2et39uzx5scnzwlafi31jfpflkd5yvg9r7o8jcpxrx7kdlbusr21k63e5mqwl9xl9g9qn6uo13v7a0q96zs5t6vqgtozesu1gj2flg83v8ft1p9xz1mwfpq15c2bz5is4upkbz98h7 == \m\u\i\1\3\3\n\z\y\l\a\m\b\2\3\r\7\l\7\2\4\w\p\s\e\p\m\c\d\z\6\a\4\s\4\k\l\i\x\j\c\4\t\6\f\7\d\e\r\a\2\t\o\h\2\d\v\d\b\b\9\q\d\9\s\5\u\c\j\0\0\w\f\n\g\p\w\q\d\2\s\m\s\d\p\c\s\1\z\7\3\d\j\f\3\r\o\q\f\v\p\u\i\a\m\o\a\r\c\s\h\7\m\r\a\6\e\z\n\j\p\u\3\t\y\l\m\o\o\e\y\x\e\3\w\s\i\y\q\i\a\q\8\i\w\u\k\j\g\o\s\e\j\o\x\1\p\b\w\n\7\6\w\1\w\9\f\h\x\5\3\t\w\z\3\q\l\v\5\4\a\z\g\0\b\4\m\l\v\i\m\o\x\x\4\5\9\n\m\x\w\m\q\9\w\e\2\e\w\4\a\0\8\o\8\k\u\5\7\y\v\a\l\f\e\v\s\p\6\z\n\n\q\r\i\z\e\u\e\1\n\p\i\e\7\l\5\u\1\5\w\x\o\u\4\e\b\w\s\f\m\1\b\f\g\4\x\z\l\x\e\q\l\s\n\a\v\m\u\s\n\r\n\t\e\u\5\a\g\3\a\x\5\f\n\j\h\5\2\t\b\e\3\a\4\a\9\b\b\q\9\o\p\y\z\4\x\0\0\n\r\8\o\t\6\l\c\k\r\g\s\e\2\k\e\c\9\a\y\b\1\c\u\i\2\z\s\1\8\u\3\c\1\a\w\i\2\x\k\7\5\4\h\q\r\0\k\h\l\7\6\d\8\c\2\e\t\3\9\u\z\x\5\s\c\n\z\w\l\a\f\i\3\1\j\f\p\f\l\k\d\5\y\v\g\9\r\7\o\8\j\c\p\x\r\x\7\k\d\l\b\u\s\r\2\1\k\6\3\e\5\m\q\w\l\9\x\l\9\g\9\q\n\6\u\o\1\3\v\7\a\0\q\9\6\z\s\5\t\6\v\q\g\t\o\z\e\s\u\1\g\j\2\f\l\g\8\3\v\8\f\t\1\p\9\x\z\1\m\w\f\p\q\1\5\c\2\b\z\5\i\s\4\u\p\k\b\z\9\8\h\7 ]] 00:10:36.820 18:17:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:36.820 18:17:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:10:36.820 [2024-07-22 18:17:48.673912] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:36.820 [2024-07-22 18:17:48.674086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65741 ] 00:10:36.820 [2024-07-22 18:17:48.836830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.078 [2024-07-22 18:17:49.074774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.336 [2024-07-22 18:17:49.277976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:38.970  Copying: 512/512 [B] (average 250 kBps) 00:10:38.970 00:10:38.970 18:17:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mui133nzylamb23r7l724wpsepmcdz6a4s4klixjc4t6f7dera2toh2dvdbb9qd9s5ucj00wfngpwqd2smsdpcs1z73djf3roqfvpuiamoarcsh7mra6eznjpu3tylmooeyxe3wsiyqiaq8iwukjgosejox1pbwn76w1w9fhx53twz3qlv54azg0b4mlvimoxx459nmxwmq9we2ew4a08o8ku57yvalfevsp6znnqrizeue1npie7l5u15wxou4ebwsfm1bfg4xzlxeqlsnavmusnrnteu5ag3ax5fnjh52tbe3a4a9bbq9opyz4x00nr8ot6lckrgse2kec9ayb1cui2zs18u3c1awi2xk754hqr0khl76d8c2et39uzx5scnzwlafi31jfpflkd5yvg9r7o8jcpxrx7kdlbusr21k63e5mqwl9xl9g9qn6uo13v7a0q96zs5t6vqgtozesu1gj2flg83v8ft1p9xz1mwfpq15c2bz5is4upkbz98h7 == \m\u\i\1\3\3\n\z\y\l\a\m\b\2\3\r\7\l\7\2\4\w\p\s\e\p\m\c\d\z\6\a\4\s\4\k\l\i\x\j\c\4\t\6\f\7\d\e\r\a\2\t\o\h\2\d\v\d\b\b\9\q\d\9\s\5\u\c\j\0\0\w\f\n\g\p\w\q\d\2\s\m\s\d\p\c\s\1\z\7\3\d\j\f\3\r\o\q\f\v\p\u\i\a\m\o\a\r\c\s\h\7\m\r\a\6\e\z\n\j\p\u\3\t\y\l\m\o\o\e\y\x\e\3\w\s\i\y\q\i\a\q\8\i\w\u\k\j\g\o\s\e\j\o\x\1\p\b\w\n\7\6\w\1\w\9\f\h\x\5\3\t\w\z\3\q\l\v\5\4\a\z\g\0\b\4\m\l\v\i\m\o\x\x\4\5\9\n\m\x\w\m\q\9\w\e\2\e\w\4\a\0\8\o\8\k\u\5\7\y\v\a\l\f\e\v\s\p\6\z\n\n\q\r\i\z\e\u\e\1\n\p\i\e\7\l\5\u\1\5\w\x\o\u\4\e\b\w\s\f\m\1\b\f\g\4\x\z\l\x\e\q\l\s\n\a\v\m\u\s\n\r\n\t\e\u\5\a\g\3\a\x\5\f\n\j\h\5\2\t\b\e\3\a\4\a\9\b\b\q\9\o\p\y\z\4\x\0\0\n\r\8\o\t\6\l\c\k\r\g\s\e\2\k\e\c\9\a\y\b\1\c\u\i\2\z\s\1\8\u\3\c\1\a\w\i\2\x\k\7\5\4\h\q\r\0\k\h\l\7\6\d\8\c\2\e\t\3\9\u\z\x\5\s\c\n\z\w\l\a\f\i\3\1\j\f\p\f\l\k\d\5\y\v\g\9\r\7\o\8\j\c\p\x\r\x\7\k\d\l\b\u\s\r\2\1\k\6\3\e\5\m\q\w\l\9\x\l\9\g\9\q\n\6\u\o\1\3\v\7\a\0\q\9\6\z\s\5\t\6\v\q\g\t\o\z\e\s\u\1\g\j\2\f\l\g\8\3\v\8\f\t\1\p\9\x\z\1\m\w\f\p\q\1\5\c\2\b\z\5\i\s\4\u\p\k\b\z\9\8\h\7 ]] 00:10:38.970 18:17:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:38.970 18:17:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:10:38.970 [2024-07-22 18:17:50.688017] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:38.970 [2024-07-22 18:17:50.688175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65769 ] 00:10:38.970 [2024-07-22 18:17:50.848944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.228 [2024-07-22 18:17:51.091478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.487 [2024-07-22 18:17:51.293786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:40.864  Copying: 512/512 [B] (average 250 kBps) 00:10:40.864 00:10:40.864 ************************************ 00:10:40.864 END TEST dd_flags_misc 00:10:40.865 ************************************ 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ mui133nzylamb23r7l724wpsepmcdz6a4s4klixjc4t6f7dera2toh2dvdbb9qd9s5ucj00wfngpwqd2smsdpcs1z73djf3roqfvpuiamoarcsh7mra6eznjpu3tylmooeyxe3wsiyqiaq8iwukjgosejox1pbwn76w1w9fhx53twz3qlv54azg0b4mlvimoxx459nmxwmq9we2ew4a08o8ku57yvalfevsp6znnqrizeue1npie7l5u15wxou4ebwsfm1bfg4xzlxeqlsnavmusnrnteu5ag3ax5fnjh52tbe3a4a9bbq9opyz4x00nr8ot6lckrgse2kec9ayb1cui2zs18u3c1awi2xk754hqr0khl76d8c2et39uzx5scnzwlafi31jfpflkd5yvg9r7o8jcpxrx7kdlbusr21k63e5mqwl9xl9g9qn6uo13v7a0q96zs5t6vqgtozesu1gj2flg83v8ft1p9xz1mwfpq15c2bz5is4upkbz98h7 == \m\u\i\1\3\3\n\z\y\l\a\m\b\2\3\r\7\l\7\2\4\w\p\s\e\p\m\c\d\z\6\a\4\s\4\k\l\i\x\j\c\4\t\6\f\7\d\e\r\a\2\t\o\h\2\d\v\d\b\b\9\q\d\9\s\5\u\c\j\0\0\w\f\n\g\p\w\q\d\2\s\m\s\d\p\c\s\1\z\7\3\d\j\f\3\r\o\q\f\v\p\u\i\a\m\o\a\r\c\s\h\7\m\r\a\6\e\z\n\j\p\u\3\t\y\l\m\o\o\e\y\x\e\3\w\s\i\y\q\i\a\q\8\i\w\u\k\j\g\o\s\e\j\o\x\1\p\b\w\n\7\6\w\1\w\9\f\h\x\5\3\t\w\z\3\q\l\v\5\4\a\z\g\0\b\4\m\l\v\i\m\o\x\x\4\5\9\n\m\x\w\m\q\9\w\e\2\e\w\4\a\0\8\o\8\k\u\5\7\y\v\a\l\f\e\v\s\p\6\z\n\n\q\r\i\z\e\u\e\1\n\p\i\e\7\l\5\u\1\5\w\x\o\u\4\e\b\w\s\f\m\1\b\f\g\4\x\z\l\x\e\q\l\s\n\a\v\m\u\s\n\r\n\t\e\u\5\a\g\3\a\x\5\f\n\j\h\5\2\t\b\e\3\a\4\a\9\b\b\q\9\o\p\y\z\4\x\0\0\n\r\8\o\t\6\l\c\k\r\g\s\e\2\k\e\c\9\a\y\b\1\c\u\i\2\z\s\1\8\u\3\c\1\a\w\i\2\x\k\7\5\4\h\q\r\0\k\h\l\7\6\d\8\c\2\e\t\3\9\u\z\x\5\s\c\n\z\w\l\a\f\i\3\1\j\f\p\f\l\k\d\5\y\v\g\9\r\7\o\8\j\c\p\x\r\x\7\k\d\l\b\u\s\r\2\1\k\6\3\e\5\m\q\w\l\9\x\l\9\g\9\q\n\6\u\o\1\3\v\7\a\0\q\9\6\z\s\5\t\6\v\q\g\t\o\z\e\s\u\1\g\j\2\f\l\g\8\3\v\8\f\t\1\p\9\x\z\1\m\w\f\p\q\1\5\c\2\b\z\5\i\s\4\u\p\k\b\z\9\8\h\7 ]] 00:10:40.865 00:10:40.865 real 0m16.229s 00:10:40.865 user 0m13.284s 00:10:40.865 sys 0m8.007s 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:10:40.865 * Second test run, disabling liburing, forcing AIO 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:40.865 ************************************ 00:10:40.865 START TEST dd_flag_append_forced_aio 00:10:40.865 ************************************ 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=ynddvx7dgljs6eyvi2wp7jv2ltnwkxxv 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=azj5u46232d7wb5zv0s2f064go6egra9 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s ynddvx7dgljs6eyvi2wp7jv2ltnwkxxv 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s azj5u46232d7wb5zv0s2f064go6egra9 00:10:40.865 18:17:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:10:40.865 [2024-07-22 18:17:52.832657] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
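What the --oflag=append run above exercises, in outline: dd.dump0 and dd.dump1 each start with 32 random bytes, the copy opens the output with the append flag, and the check further down requires dd.dump1 to end up as its original bytes followed by dump0's. A hedged sketch of the same idea with stand-in files (the /tmp paths and the DD variable are illustrative; the binary path is the one used throughout this log and assumes SPDK is built there):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf 'AAAA' > /tmp/in.bin          # stands in for dd.dump0
printf 'BBBB' > /tmp/out.bin         # stands in for dd.dump1
"$DD" --aio --if=/tmp/in.bin --of=/tmp/out.bin --oflag=append
[[ $(</tmp/out.bin) == BBBBAAAA ]] && echo "append kept the existing bytes and added the new ones"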
00:10:40.865 [2024-07-22 18:17:52.832813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65815 ] 00:10:41.124 [2024-07-22 18:17:52.998168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.382 [2024-07-22 18:17:53.240625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.639 [2024-07-22 18:17:53.443407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:43.012  Copying: 32/32 [B] (average 31 kBps) 00:10:43.012 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ azj5u46232d7wb5zv0s2f064go6egra9ynddvx7dgljs6eyvi2wp7jv2ltnwkxxv == \a\z\j\5\u\4\6\2\3\2\d\7\w\b\5\z\v\0\s\2\f\0\6\4\g\o\6\e\g\r\a\9\y\n\d\d\v\x\7\d\g\l\j\s\6\e\y\v\i\2\w\p\7\j\v\2\l\t\n\w\k\x\x\v ]] 00:10:43.012 00:10:43.012 real 0m2.041s 00:10:43.012 user 0m1.641s 00:10:43.012 sys 0m0.274s 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:43.012 ************************************ 00:10:43.012 END TEST dd_flag_append_forced_aio 00:10:43.012 ************************************ 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:43.012 ************************************ 00:10:43.012 START TEST dd_flag_directory_forced_aio 00:10:43.012 ************************************ 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:43.012 18:17:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:43.012 [2024-07-22 18:17:54.935605] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:43.012 [2024-07-22 18:17:54.935778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65858 ] 00:10:43.270 [2024-07-22 18:17:55.102587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.529 [2024-07-22 18:17:55.353381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.787 [2024-07-22 18:17:55.560575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:43.787 [2024-07-22 18:17:55.671518] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:43.787 [2024-07-22 18:17:55.671592] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:43.787 [2024-07-22 18:17:55.671620] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:44.722 [2024-07-22 18:17:56.415085] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:44.980 18:17:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:10:44.980 [2024-07-22 18:17:56.975651] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:44.980 [2024-07-22 18:17:56.975840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65881 ] 00:10:45.254 [2024-07-22 18:17:57.153102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.542 [2024-07-22 18:17:57.439247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.800 [2024-07-22 18:17:57.641732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:45.800 [2024-07-22 18:17:57.751714] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:45.800 [2024-07-22 18:17:57.751792] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:10:45.800 [2024-07-22 18:17:57.751820] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:46.734 [2024-07-22 18:17:58.501978] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:10:46.992 
18:17:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:46.992 00:10:46.992 real 0m4.150s 00:10:46.992 user 0m3.373s 00:10:46.992 sys 0m0.542s 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:46.992 ************************************ 00:10:46.992 END TEST dd_flag_directory_forced_aio 00:10:46.992 ************************************ 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:46.992 ************************************ 00:10:46.992 START TEST dd_flag_nofollow_forced_aio 00:10:46.992 ************************************ 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:46.992 18:17:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:10:46.992 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:10:46.992 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:46.992 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:10:46.992 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:46.992 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.250 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:47.250 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.250 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:47.250 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.250 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:47.250 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:47.250 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:47.250 18:17:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:47.250 [2024-07-22 18:17:59.103406] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:47.250 [2024-07-22 18:17:59.103588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65927 ] 00:10:47.508 [2024-07-22 18:17:59.270891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.508 [2024-07-22 18:17:59.518402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.766 [2024-07-22 18:17:59.722828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:48.023 [2024-07-22 18:17:59.833288] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:48.023 [2024-07-22 18:17:59.833358] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:10:48.023 [2024-07-22 18:17:59.833385] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:48.588 [2024-07-22 18:18:00.586274] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
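The NOT/valid_exec_arg lines around this point are a negative-test wrapper: the spdk_dd invocation is expected to fail, its exit status is captured as es, large statuses are folded down, and the case passes only when the status is non-zero. A simplified stand-in for that pattern (not SPDK's actual common/autotest_common.sh helper):

NOT() {
  # succeed only when the wrapped command fails, mirroring the negative-test intent
  if "$@"; then return 1; else return 0; fi
}
NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success would fail the test"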
00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:49.154 18:18:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:10:49.154 [2024-07-22 18:18:01.121980] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:49.154 [2024-07-22 18:18:01.122152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65954 ] 00:10:49.412 [2024-07-22 18:18:01.285430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.670 [2024-07-22 18:18:01.525133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.928 [2024-07-22 18:18:01.726949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:49.928 [2024-07-22 18:18:01.835272] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:49.928 [2024-07-22 18:18:01.835344] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:10:49.928 [2024-07-22 18:18:01.835383] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:50.863 [2024-07-22 18:18:02.588526] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:51.121 18:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:10:51.121 18:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:51.121 18:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:10:51.121 18:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:10:51.121 18:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:10:51.121 18:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:51.121 18:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:10:51.121 18:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:51.121 18:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:51.121 18:18:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:51.121 [2024-07-22 18:18:03.129009] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:51.121 [2024-07-22 18:18:03.129174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65979 ] 00:10:51.388 [2024-07-22 18:18:03.293464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.649 [2024-07-22 18:18:03.531751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.908 [2024-07-22 18:18:03.736013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:53.284  Copying: 512/512 [B] (average 500 kBps) 00:10:53.284 00:10:53.284 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 1gw5c5zkhp0muw494rj51jxzlhu8km19oox2d2bupzkspj23ei6h9xnfes07l8hpb45gf7l4a49euzr963xc6si3x0j4dh4mju3zkavai4jzs02b06lo14obctdn1t03zscivraoo2zqq85lkr4af40ahto8bqqhmz4uframaaszu461v4qgdjma6gc588mr924ehjqts2ks36qqxe0hz6n3rsuaec24hodlwn3x60ji50hpwhgfcuzgmgf8bkhgz0a8toc3ntcpaz6gdpmtqpu5fafk9yo429xkm7w81342edabitbxkqwxhuv74r4h9qe8ikkxpcgbcbho6ityjbevzvtu2osotiq1jtzd90kaxksi7mspbn5ukx7x9hlold5tparsouxa5f6agdr2pxklm1lttf7ayd8apu11bbqbhpv2y5v6wrusqatsu4l6b1lbqyc86ioj7a7kn5h5mpuncznd20sw47lqwbcpqhrctkext19dqea942po3oc0 == \1\g\w\5\c\5\z\k\h\p\0\m\u\w\4\9\4\r\j\5\1\j\x\z\l\h\u\8\k\m\1\9\o\o\x\2\d\2\b\u\p\z\k\s\p\j\2\3\e\i\6\h\9\x\n\f\e\s\0\7\l\8\h\p\b\4\5\g\f\7\l\4\a\4\9\e\u\z\r\9\6\3\x\c\6\s\i\3\x\0\j\4\d\h\4\m\j\u\3\z\k\a\v\a\i\4\j\z\s\0\2\b\0\6\l\o\1\4\o\b\c\t\d\n\1\t\0\3\z\s\c\i\v\r\a\o\o\2\z\q\q\8\5\l\k\r\4\a\f\4\0\a\h\t\o\8\b\q\q\h\m\z\4\u\f\r\a\m\a\a\s\z\u\4\6\1\v\4\q\g\d\j\m\a\6\g\c\5\8\8\m\r\9\2\4\e\h\j\q\t\s\2\k\s\3\6\q\q\x\e\0\h\z\6\n\3\r\s\u\a\e\c\2\4\h\o\d\l\w\n\3\x\6\0\j\i\5\0\h\p\w\h\g\f\c\u\z\g\m\g\f\8\b\k\h\g\z\0\a\8\t\o\c\3\n\t\c\p\a\z\6\g\d\p\m\t\q\p\u\5\f\a\f\k\9\y\o\4\2\9\x\k\m\7\w\8\1\3\4\2\e\d\a\b\i\t\b\x\k\q\w\x\h\u\v\7\4\r\4\h\9\q\e\8\i\k\k\x\p\c\g\b\c\b\h\o\6\i\t\y\j\b\e\v\z\v\t\u\2\o\s\o\t\i\q\1\j\t\z\d\9\0\k\a\x\k\s\i\7\m\s\p\b\n\5\u\k\x\7\x\9\h\l\o\l\d\5\t\p\a\r\s\o\u\x\a\5\f\6\a\g\d\r\2\p\x\k\l\m\1\l\t\t\f\7\a\y\d\8\a\p\u\1\1\b\b\q\b\h\p\v\2\y\5\v\6\w\r\u\s\q\a\t\s\u\4\l\6\b\1\l\b\q\y\c\8\6\i\o\j\7\a\7\k\n\5\h\5\m\p\u\n\c\z\n\d\2\0\s\w\4\7\l\q\w\b\c\p\q\h\r\c\t\k\e\x\t\1\9\d\q\e\a\9\4\2\p\o\3\o\c\0 ]] 00:10:53.284 00:10:53.284 real 0m6.040s 00:10:53.285 user 0m4.938s 00:10:53.285 sys 0m0.741s 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:53.285 ************************************ 00:10:53.285 END TEST dd_flag_nofollow_forced_aio 
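Summary of the nofollow case ending here: dd.dump0.link and dd.dump1.link are symlinks created with ln -fs, opening them with --iflag=nofollow or --oflag=nofollow fails with "Too many levels of symbolic links", and the final copy through the link without the flag succeeds. A hedged sketch of the same check with temporary paths (the /tmp names are illustrative; the binary path is the one from this log):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf 'payload' > /tmp/real.bin
ln -fs /tmp/real.bin /tmp/real.bin.link
"$DD" --aio --if=/tmp/real.bin.link --iflag=nofollow --of=/tmp/copy.bin \
  || echo "open with nofollow rejects the symlink, as the negative cases expect"
"$DD" --aio --if=/tmp/real.bin.link --of=/tmp/copy.bin \
  && echo "the same path opens fine once the flag is dropped"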
00:10:53.285 ************************************ 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:53.285 ************************************ 00:10:53.285 START TEST dd_flag_noatime_forced_aio 00:10:53.285 ************************************ 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721672283 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721672285 00:10:53.285 18:18:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:10:54.221 18:18:06 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:54.221 [2024-07-22 18:18:06.204230] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
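The noatime run starting above is verified with stat --printf=%X (access time in epoch seconds): the source atime is recorded, the copy is made with --iflag=noatime, and the recorded value must be unchanged; a later copy without the flag must advance it. A hedged sketch of that check (the /tmp output path is a placeholder, and the relatime caveat is an assumption about the host's mount options):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
atime_before=$(stat --printf=%X "$SRC")
sleep 1
"$DD" --aio --if="$SRC" --iflag=noatime --of=/tmp/out.bin
atime_after=$(stat --printf=%X "$SRC")
(( atime_before == atime_after )) && echo "noatime left the source access time untouched"
# A follow-up copy without --iflag=noatime is expected to advance the atime,
# subject to the host's relatime behaviour.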
00:10:54.221 [2024-07-22 18:18:06.204435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66037 ] 00:10:54.479 [2024-07-22 18:18:06.369025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.738 [2024-07-22 18:18:06.630903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.026 [2024-07-22 18:18:06.832839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:56.402  Copying: 512/512 [B] (average 500 kBps) 00:10:56.402 00:10:56.402 18:18:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:56.402 18:18:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721672283 )) 00:10:56.402 18:18:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:56.402 18:18:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721672285 )) 00:10:56.402 18:18:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:56.402 [2024-07-22 18:18:08.248766] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:56.402 [2024-07-22 18:18:08.248916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66066 ] 00:10:56.402 [2024-07-22 18:18:08.410922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.659 [2024-07-22 18:18:08.658431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.917 [2024-07-22 18:18:08.865114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:58.550  Copying: 512/512 [B] (average 500 kBps) 00:10:58.550 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721672288 )) 00:10:58.550 00:10:58.550 real 0m5.112s 00:10:58.550 user 0m3.309s 00:10:58.550 sys 0m0.550s 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:58.550 ************************************ 00:10:58.550 END TEST dd_flag_noatime_forced_aio 00:10:58.550 ************************************ 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.550 18:18:10 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:10:58.550 ************************************ 00:10:58.550 START TEST dd_flags_misc_forced_aio 00:10:58.550 ************************************ 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:10:58.550 18:18:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:10:58.550 [2024-07-22 18:18:10.373551] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:58.550 [2024-07-22 18:18:10.373727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66109 ] 00:10:58.550 [2024-07-22 18:18:10.550931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.808 [2024-07-22 18:18:10.793802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.066 [2024-07-22 18:18:10.996574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:00.699  Copying: 512/512 [B] (average 500 kBps) 00:11:00.699 00:11:00.699 18:18:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9g29kquhpcozwuhnvv99mxsqwwzkf52ffpvzhyctksqhgbbod25bp2umdoje8m9ssrtn4vonpjzk8ywn4kakj4b2rqoe97d7e9l1cf7elsxe63k0kk5ojgx54awzncg06q36ljhem8656jp13hrjm2ies94ono6221dgf87o3bjdfetm3kwrziapqctrxiuivg5zookah78q8kjju4gcrwf8gbpr1ytxh232qzmfktqrdayssm9vz2is1lz9odbrmp4utw2k4lzj45l9rqtvumcbihf6kj1fgnfqesjnd2j83p1lir0us4wzkqbg06uw9lh1gq11wvq70mqvny6sjz97dvnqff22568l66wor7diddzm7nn7zmbm31lnb23kxbfaffasgr37af0xzf54g06uuuwoa5q9xry2783d4anmn7gmt0d6u5sfyyasd1zhjovzana6n1khceayzttm9pt6iyjcxnwxll21cbhjqh4s4vxzh4r8dtd354ra4x24 == 
\9\g\2\9\k\q\u\h\p\c\o\z\w\u\h\n\v\v\9\9\m\x\s\q\w\w\z\k\f\5\2\f\f\p\v\z\h\y\c\t\k\s\q\h\g\b\b\o\d\2\5\b\p\2\u\m\d\o\j\e\8\m\9\s\s\r\t\n\4\v\o\n\p\j\z\k\8\y\w\n\4\k\a\k\j\4\b\2\r\q\o\e\9\7\d\7\e\9\l\1\c\f\7\e\l\s\x\e\6\3\k\0\k\k\5\o\j\g\x\5\4\a\w\z\n\c\g\0\6\q\3\6\l\j\h\e\m\8\6\5\6\j\p\1\3\h\r\j\m\2\i\e\s\9\4\o\n\o\6\2\2\1\d\g\f\8\7\o\3\b\j\d\f\e\t\m\3\k\w\r\z\i\a\p\q\c\t\r\x\i\u\i\v\g\5\z\o\o\k\a\h\7\8\q\8\k\j\j\u\4\g\c\r\w\f\8\g\b\p\r\1\y\t\x\h\2\3\2\q\z\m\f\k\t\q\r\d\a\y\s\s\m\9\v\z\2\i\s\1\l\z\9\o\d\b\r\m\p\4\u\t\w\2\k\4\l\z\j\4\5\l\9\r\q\t\v\u\m\c\b\i\h\f\6\k\j\1\f\g\n\f\q\e\s\j\n\d\2\j\8\3\p\1\l\i\r\0\u\s\4\w\z\k\q\b\g\0\6\u\w\9\l\h\1\g\q\1\1\w\v\q\7\0\m\q\v\n\y\6\s\j\z\9\7\d\v\n\q\f\f\2\2\5\6\8\l\6\6\w\o\r\7\d\i\d\d\z\m\7\n\n\7\z\m\b\m\3\1\l\n\b\2\3\k\x\b\f\a\f\f\a\s\g\r\3\7\a\f\0\x\z\f\5\4\g\0\6\u\u\u\w\o\a\5\q\9\x\r\y\2\7\8\3\d\4\a\n\m\n\7\g\m\t\0\d\6\u\5\s\f\y\y\a\s\d\1\z\h\j\o\v\z\a\n\a\6\n\1\k\h\c\e\a\y\z\t\t\m\9\p\t\6\i\y\j\c\x\n\w\x\l\l\2\1\c\b\h\j\q\h\4\s\4\v\x\z\h\4\r\8\d\t\d\3\5\4\r\a\4\x\2\4 ]] 00:11:00.699 18:18:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:00.699 18:18:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:00.699 [2024-07-22 18:18:12.402893] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:00.699 [2024-07-22 18:18:12.403049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66130 ] 00:11:00.699 [2024-07-22 18:18:12.569780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.957 [2024-07-22 18:18:12.808969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.216 [2024-07-22 18:18:13.016005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:02.678  Copying: 512/512 [B] (average 500 kBps) 00:11:02.678 00:11:02.678 18:18:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9g29kquhpcozwuhnvv99mxsqwwzkf52ffpvzhyctksqhgbbod25bp2umdoje8m9ssrtn4vonpjzk8ywn4kakj4b2rqoe97d7e9l1cf7elsxe63k0kk5ojgx54awzncg06q36ljhem8656jp13hrjm2ies94ono6221dgf87o3bjdfetm3kwrziapqctrxiuivg5zookah78q8kjju4gcrwf8gbpr1ytxh232qzmfktqrdayssm9vz2is1lz9odbrmp4utw2k4lzj45l9rqtvumcbihf6kj1fgnfqesjnd2j83p1lir0us4wzkqbg06uw9lh1gq11wvq70mqvny6sjz97dvnqff22568l66wor7diddzm7nn7zmbm31lnb23kxbfaffasgr37af0xzf54g06uuuwoa5q9xry2783d4anmn7gmt0d6u5sfyyasd1zhjovzana6n1khceayzttm9pt6iyjcxnwxll21cbhjqh4s4vxzh4r8dtd354ra4x24 == 
\9\g\2\9\k\q\u\h\p\c\o\z\w\u\h\n\v\v\9\9\m\x\s\q\w\w\z\k\f\5\2\f\f\p\v\z\h\y\c\t\k\s\q\h\g\b\b\o\d\2\5\b\p\2\u\m\d\o\j\e\8\m\9\s\s\r\t\n\4\v\o\n\p\j\z\k\8\y\w\n\4\k\a\k\j\4\b\2\r\q\o\e\9\7\d\7\e\9\l\1\c\f\7\e\l\s\x\e\6\3\k\0\k\k\5\o\j\g\x\5\4\a\w\z\n\c\g\0\6\q\3\6\l\j\h\e\m\8\6\5\6\j\p\1\3\h\r\j\m\2\i\e\s\9\4\o\n\o\6\2\2\1\d\g\f\8\7\o\3\b\j\d\f\e\t\m\3\k\w\r\z\i\a\p\q\c\t\r\x\i\u\i\v\g\5\z\o\o\k\a\h\7\8\q\8\k\j\j\u\4\g\c\r\w\f\8\g\b\p\r\1\y\t\x\h\2\3\2\q\z\m\f\k\t\q\r\d\a\y\s\s\m\9\v\z\2\i\s\1\l\z\9\o\d\b\r\m\p\4\u\t\w\2\k\4\l\z\j\4\5\l\9\r\q\t\v\u\m\c\b\i\h\f\6\k\j\1\f\g\n\f\q\e\s\j\n\d\2\j\8\3\p\1\l\i\r\0\u\s\4\w\z\k\q\b\g\0\6\u\w\9\l\h\1\g\q\1\1\w\v\q\7\0\m\q\v\n\y\6\s\j\z\9\7\d\v\n\q\f\f\2\2\5\6\8\l\6\6\w\o\r\7\d\i\d\d\z\m\7\n\n\7\z\m\b\m\3\1\l\n\b\2\3\k\x\b\f\a\f\f\a\s\g\r\3\7\a\f\0\x\z\f\5\4\g\0\6\u\u\u\w\o\a\5\q\9\x\r\y\2\7\8\3\d\4\a\n\m\n\7\g\m\t\0\d\6\u\5\s\f\y\y\a\s\d\1\z\h\j\o\v\z\a\n\a\6\n\1\k\h\c\e\a\y\z\t\t\m\9\p\t\6\i\y\j\c\x\n\w\x\l\l\2\1\c\b\h\j\q\h\4\s\4\v\x\z\h\4\r\8\d\t\d\3\5\4\r\a\4\x\2\4 ]] 00:11:02.678 18:18:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:02.678 18:18:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:02.678 [2024-07-22 18:18:14.433268] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:02.678 [2024-07-22 18:18:14.433445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66155 ] 00:11:02.678 [2024-07-22 18:18:14.608078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.937 [2024-07-22 18:18:14.863826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.195 [2024-07-22 18:18:15.080657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:04.392  Copying: 512/512 [B] (average 500 kBps) 00:11:04.392 00:11:04.651 18:18:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9g29kquhpcozwuhnvv99mxsqwwzkf52ffpvzhyctksqhgbbod25bp2umdoje8m9ssrtn4vonpjzk8ywn4kakj4b2rqoe97d7e9l1cf7elsxe63k0kk5ojgx54awzncg06q36ljhem8656jp13hrjm2ies94ono6221dgf87o3bjdfetm3kwrziapqctrxiuivg5zookah78q8kjju4gcrwf8gbpr1ytxh232qzmfktqrdayssm9vz2is1lz9odbrmp4utw2k4lzj45l9rqtvumcbihf6kj1fgnfqesjnd2j83p1lir0us4wzkqbg06uw9lh1gq11wvq70mqvny6sjz97dvnqff22568l66wor7diddzm7nn7zmbm31lnb23kxbfaffasgr37af0xzf54g06uuuwoa5q9xry2783d4anmn7gmt0d6u5sfyyasd1zhjovzana6n1khceayzttm9pt6iyjcxnwxll21cbhjqh4s4vxzh4r8dtd354ra4x24 == 
\9\g\2\9\k\q\u\h\p\c\o\z\w\u\h\n\v\v\9\9\m\x\s\q\w\w\z\k\f\5\2\f\f\p\v\z\h\y\c\t\k\s\q\h\g\b\b\o\d\2\5\b\p\2\u\m\d\o\j\e\8\m\9\s\s\r\t\n\4\v\o\n\p\j\z\k\8\y\w\n\4\k\a\k\j\4\b\2\r\q\o\e\9\7\d\7\e\9\l\1\c\f\7\e\l\s\x\e\6\3\k\0\k\k\5\o\j\g\x\5\4\a\w\z\n\c\g\0\6\q\3\6\l\j\h\e\m\8\6\5\6\j\p\1\3\h\r\j\m\2\i\e\s\9\4\o\n\o\6\2\2\1\d\g\f\8\7\o\3\b\j\d\f\e\t\m\3\k\w\r\z\i\a\p\q\c\t\r\x\i\u\i\v\g\5\z\o\o\k\a\h\7\8\q\8\k\j\j\u\4\g\c\r\w\f\8\g\b\p\r\1\y\t\x\h\2\3\2\q\z\m\f\k\t\q\r\d\a\y\s\s\m\9\v\z\2\i\s\1\l\z\9\o\d\b\r\m\p\4\u\t\w\2\k\4\l\z\j\4\5\l\9\r\q\t\v\u\m\c\b\i\h\f\6\k\j\1\f\g\n\f\q\e\s\j\n\d\2\j\8\3\p\1\l\i\r\0\u\s\4\w\z\k\q\b\g\0\6\u\w\9\l\h\1\g\q\1\1\w\v\q\7\0\m\q\v\n\y\6\s\j\z\9\7\d\v\n\q\f\f\2\2\5\6\8\l\6\6\w\o\r\7\d\i\d\d\z\m\7\n\n\7\z\m\b\m\3\1\l\n\b\2\3\k\x\b\f\a\f\f\a\s\g\r\3\7\a\f\0\x\z\f\5\4\g\0\6\u\u\u\w\o\a\5\q\9\x\r\y\2\7\8\3\d\4\a\n\m\n\7\g\m\t\0\d\6\u\5\s\f\y\y\a\s\d\1\z\h\j\o\v\z\a\n\a\6\n\1\k\h\c\e\a\y\z\t\t\m\9\p\t\6\i\y\j\c\x\n\w\x\l\l\2\1\c\b\h\j\q\h\4\s\4\v\x\z\h\4\r\8\d\t\d\3\5\4\r\a\4\x\2\4 ]] 00:11:04.651 18:18:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:04.651 18:18:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:04.651 [2024-07-22 18:18:16.520244] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:04.651 [2024-07-22 18:18:16.520414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66180 ] 00:11:04.910 [2024-07-22 18:18:16.685703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.910 [2024-07-22 18:18:16.925088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.168 [2024-07-22 18:18:17.128139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:06.800  Copying: 512/512 [B] (average 500 kBps) 00:11:06.800 00:11:06.800 18:18:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 9g29kquhpcozwuhnvv99mxsqwwzkf52ffpvzhyctksqhgbbod25bp2umdoje8m9ssrtn4vonpjzk8ywn4kakj4b2rqoe97d7e9l1cf7elsxe63k0kk5ojgx54awzncg06q36ljhem8656jp13hrjm2ies94ono6221dgf87o3bjdfetm3kwrziapqctrxiuivg5zookah78q8kjju4gcrwf8gbpr1ytxh232qzmfktqrdayssm9vz2is1lz9odbrmp4utw2k4lzj45l9rqtvumcbihf6kj1fgnfqesjnd2j83p1lir0us4wzkqbg06uw9lh1gq11wvq70mqvny6sjz97dvnqff22568l66wor7diddzm7nn7zmbm31lnb23kxbfaffasgr37af0xzf54g06uuuwoa5q9xry2783d4anmn7gmt0d6u5sfyyasd1zhjovzana6n1khceayzttm9pt6iyjcxnwxll21cbhjqh4s4vxzh4r8dtd354ra4x24 == 
\9\g\2\9\k\q\u\h\p\c\o\z\w\u\h\n\v\v\9\9\m\x\s\q\w\w\z\k\f\5\2\f\f\p\v\z\h\y\c\t\k\s\q\h\g\b\b\o\d\2\5\b\p\2\u\m\d\o\j\e\8\m\9\s\s\r\t\n\4\v\o\n\p\j\z\k\8\y\w\n\4\k\a\k\j\4\b\2\r\q\o\e\9\7\d\7\e\9\l\1\c\f\7\e\l\s\x\e\6\3\k\0\k\k\5\o\j\g\x\5\4\a\w\z\n\c\g\0\6\q\3\6\l\j\h\e\m\8\6\5\6\j\p\1\3\h\r\j\m\2\i\e\s\9\4\o\n\o\6\2\2\1\d\g\f\8\7\o\3\b\j\d\f\e\t\m\3\k\w\r\z\i\a\p\q\c\t\r\x\i\u\i\v\g\5\z\o\o\k\a\h\7\8\q\8\k\j\j\u\4\g\c\r\w\f\8\g\b\p\r\1\y\t\x\h\2\3\2\q\z\m\f\k\t\q\r\d\a\y\s\s\m\9\v\z\2\i\s\1\l\z\9\o\d\b\r\m\p\4\u\t\w\2\k\4\l\z\j\4\5\l\9\r\q\t\v\u\m\c\b\i\h\f\6\k\j\1\f\g\n\f\q\e\s\j\n\d\2\j\8\3\p\1\l\i\r\0\u\s\4\w\z\k\q\b\g\0\6\u\w\9\l\h\1\g\q\1\1\w\v\q\7\0\m\q\v\n\y\6\s\j\z\9\7\d\v\n\q\f\f\2\2\5\6\8\l\6\6\w\o\r\7\d\i\d\d\z\m\7\n\n\7\z\m\b\m\3\1\l\n\b\2\3\k\x\b\f\a\f\f\a\s\g\r\3\7\a\f\0\x\z\f\5\4\g\0\6\u\u\u\w\o\a\5\q\9\x\r\y\2\7\8\3\d\4\a\n\m\n\7\g\m\t\0\d\6\u\5\s\f\y\y\a\s\d\1\z\h\j\o\v\z\a\n\a\6\n\1\k\h\c\e\a\y\z\t\t\m\9\p\t\6\i\y\j\c\x\n\w\x\l\l\2\1\c\b\h\j\q\h\4\s\4\v\x\z\h\4\r\8\d\t\d\3\5\4\r\a\4\x\2\4 ]] 00:11:06.800 18:18:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:06.800 18:18:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:11:06.800 18:18:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:06.800 18:18:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:06.800 18:18:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:06.800 18:18:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:06.800 [2024-07-22 18:18:18.564104] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
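The dd_flags_misc_forced_aio block running here iterates a small matrix: each read flag in (direct, nonblock) is paired with each write flag in (direct, nonblock, sync, dsync), the 512-byte dump is copied with that pair plus --aio, and the output must match the input byte for byte. A condensed sketch of the same loop (file locations are placeholders; direct I/O needs a filesystem that supports it, and cmp stands in for the test's string comparison):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
head -c 512 /dev/urandom > /tmp/in.bin        # stands in for the 512-byte dd.dump0
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for iflag in "${flags_ro[@]}"; do
  for oflag in "${flags_rw[@]}"; do
    "$DD" --aio --if=/tmp/in.bin --iflag="$iflag" --of=/tmp/out.bin --oflag="$oflag"
    cmp -s /tmp/in.bin /tmp/out.bin && echo "ok: iflag=$iflag oflag=$oflag"
  done
done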
00:11:06.800 [2024-07-22 18:18:18.564362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66205 ] 00:11:06.800 [2024-07-22 18:18:18.746367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.059 [2024-07-22 18:18:18.985943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.317 [2024-07-22 18:18:19.188488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:08.689  Copying: 512/512 [B] (average 500 kBps) 00:11:08.689 00:11:08.690 18:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qi1rvf1y1v2sn5o7fdx0au5uhuu4hfdgys60p34qh9nzu23qudkdle0lx5d1z81rs24nnufh4sdtaeyezz3v4k25hbdbq0wuulaux88jc9jwuwqobwh0m5o49rhugtf1bo45ec1xo5h31ubuom7q3s392mds63iihkzuv918j5xvyp8lv1qrbvf6w6ejor2amddly9oz1z9d5ftwzuozzy1olg6c3a6rcwb55qh92dymgznfo9wm4jbpm4vym94ceery8n792abpedyxgcyuj2dx1wuwc9n048t9d9dxt8q95smmsj6giqkl84day7t2t6s1r5orh5kd01c0yvt7vdddda52u6kri29wyyx8fdabf0ilr7s18gvfie1cbxqvueftgvr41m9ol0g0cst8xl749qqs08e0j0hvkvcdoux2yy7vueds2yd1mil4uzdhse99ukuka0yx2bjd6kctjcid4tm03nx0so7v5bwqlwx8kvotth8zsd7d7ptyxdft == \q\i\1\r\v\f\1\y\1\v\2\s\n\5\o\7\f\d\x\0\a\u\5\u\h\u\u\4\h\f\d\g\y\s\6\0\p\3\4\q\h\9\n\z\u\2\3\q\u\d\k\d\l\e\0\l\x\5\d\1\z\8\1\r\s\2\4\n\n\u\f\h\4\s\d\t\a\e\y\e\z\z\3\v\4\k\2\5\h\b\d\b\q\0\w\u\u\l\a\u\x\8\8\j\c\9\j\w\u\w\q\o\b\w\h\0\m\5\o\4\9\r\h\u\g\t\f\1\b\o\4\5\e\c\1\x\o\5\h\3\1\u\b\u\o\m\7\q\3\s\3\9\2\m\d\s\6\3\i\i\h\k\z\u\v\9\1\8\j\5\x\v\y\p\8\l\v\1\q\r\b\v\f\6\w\6\e\j\o\r\2\a\m\d\d\l\y\9\o\z\1\z\9\d\5\f\t\w\z\u\o\z\z\y\1\o\l\g\6\c\3\a\6\r\c\w\b\5\5\q\h\9\2\d\y\m\g\z\n\f\o\9\w\m\4\j\b\p\m\4\v\y\m\9\4\c\e\e\r\y\8\n\7\9\2\a\b\p\e\d\y\x\g\c\y\u\j\2\d\x\1\w\u\w\c\9\n\0\4\8\t\9\d\9\d\x\t\8\q\9\5\s\m\m\s\j\6\g\i\q\k\l\8\4\d\a\y\7\t\2\t\6\s\1\r\5\o\r\h\5\k\d\0\1\c\0\y\v\t\7\v\d\d\d\d\a\5\2\u\6\k\r\i\2\9\w\y\y\x\8\f\d\a\b\f\0\i\l\r\7\s\1\8\g\v\f\i\e\1\c\b\x\q\v\u\e\f\t\g\v\r\4\1\m\9\o\l\0\g\0\c\s\t\8\x\l\7\4\9\q\q\s\0\8\e\0\j\0\h\v\k\v\c\d\o\u\x\2\y\y\7\v\u\e\d\s\2\y\d\1\m\i\l\4\u\z\d\h\s\e\9\9\u\k\u\k\a\0\y\x\2\b\j\d\6\k\c\t\j\c\i\d\4\t\m\0\3\n\x\0\s\o\7\v\5\b\w\q\l\w\x\8\k\v\o\t\t\h\8\z\s\d\7\d\7\p\t\y\x\d\f\t ]] 00:11:08.690 18:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:08.690 18:18:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:08.690 [2024-07-22 18:18:20.665158] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:08.690 [2024-07-22 18:18:20.665355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66230 ] 00:11:08.948 [2024-07-22 18:18:20.838707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.216 [2024-07-22 18:18:21.080864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.491 [2024-07-22 18:18:21.286201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:10.865  Copying: 512/512 [B] (average 500 kBps) 00:11:10.865 00:11:10.865 18:18:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qi1rvf1y1v2sn5o7fdx0au5uhuu4hfdgys60p34qh9nzu23qudkdle0lx5d1z81rs24nnufh4sdtaeyezz3v4k25hbdbq0wuulaux88jc9jwuwqobwh0m5o49rhugtf1bo45ec1xo5h31ubuom7q3s392mds63iihkzuv918j5xvyp8lv1qrbvf6w6ejor2amddly9oz1z9d5ftwzuozzy1olg6c3a6rcwb55qh92dymgznfo9wm4jbpm4vym94ceery8n792abpedyxgcyuj2dx1wuwc9n048t9d9dxt8q95smmsj6giqkl84day7t2t6s1r5orh5kd01c0yvt7vdddda52u6kri29wyyx8fdabf0ilr7s18gvfie1cbxqvueftgvr41m9ol0g0cst8xl749qqs08e0j0hvkvcdoux2yy7vueds2yd1mil4uzdhse99ukuka0yx2bjd6kctjcid4tm03nx0so7v5bwqlwx8kvotth8zsd7d7ptyxdft == \q\i\1\r\v\f\1\y\1\v\2\s\n\5\o\7\f\d\x\0\a\u\5\u\h\u\u\4\h\f\d\g\y\s\6\0\p\3\4\q\h\9\n\z\u\2\3\q\u\d\k\d\l\e\0\l\x\5\d\1\z\8\1\r\s\2\4\n\n\u\f\h\4\s\d\t\a\e\y\e\z\z\3\v\4\k\2\5\h\b\d\b\q\0\w\u\u\l\a\u\x\8\8\j\c\9\j\w\u\w\q\o\b\w\h\0\m\5\o\4\9\r\h\u\g\t\f\1\b\o\4\5\e\c\1\x\o\5\h\3\1\u\b\u\o\m\7\q\3\s\3\9\2\m\d\s\6\3\i\i\h\k\z\u\v\9\1\8\j\5\x\v\y\p\8\l\v\1\q\r\b\v\f\6\w\6\e\j\o\r\2\a\m\d\d\l\y\9\o\z\1\z\9\d\5\f\t\w\z\u\o\z\z\y\1\o\l\g\6\c\3\a\6\r\c\w\b\5\5\q\h\9\2\d\y\m\g\z\n\f\o\9\w\m\4\j\b\p\m\4\v\y\m\9\4\c\e\e\r\y\8\n\7\9\2\a\b\p\e\d\y\x\g\c\y\u\j\2\d\x\1\w\u\w\c\9\n\0\4\8\t\9\d\9\d\x\t\8\q\9\5\s\m\m\s\j\6\g\i\q\k\l\8\4\d\a\y\7\t\2\t\6\s\1\r\5\o\r\h\5\k\d\0\1\c\0\y\v\t\7\v\d\d\d\d\a\5\2\u\6\k\r\i\2\9\w\y\y\x\8\f\d\a\b\f\0\i\l\r\7\s\1\8\g\v\f\i\e\1\c\b\x\q\v\u\e\f\t\g\v\r\4\1\m\9\o\l\0\g\0\c\s\t\8\x\l\7\4\9\q\q\s\0\8\e\0\j\0\h\v\k\v\c\d\o\u\x\2\y\y\7\v\u\e\d\s\2\y\d\1\m\i\l\4\u\z\d\h\s\e\9\9\u\k\u\k\a\0\y\x\2\b\j\d\6\k\c\t\j\c\i\d\4\t\m\0\3\n\x\0\s\o\7\v\5\b\w\q\l\w\x\8\k\v\o\t\t\h\8\z\s\d\7\d\7\p\t\y\x\d\f\t ]] 00:11:10.865 18:18:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:10.865 18:18:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:10.865 [2024-07-22 18:18:22.695689] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:10.865 [2024-07-22 18:18:22.695847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66255 ] 00:11:10.865 [2024-07-22 18:18:22.862495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.123 [2024-07-22 18:18:23.109557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.424 [2024-07-22 18:18:23.316218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:12.735  Copying: 512/512 [B] (average 500 kBps) 00:11:12.735 00:11:12.735 18:18:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qi1rvf1y1v2sn5o7fdx0au5uhuu4hfdgys60p34qh9nzu23qudkdle0lx5d1z81rs24nnufh4sdtaeyezz3v4k25hbdbq0wuulaux88jc9jwuwqobwh0m5o49rhugtf1bo45ec1xo5h31ubuom7q3s392mds63iihkzuv918j5xvyp8lv1qrbvf6w6ejor2amddly9oz1z9d5ftwzuozzy1olg6c3a6rcwb55qh92dymgznfo9wm4jbpm4vym94ceery8n792abpedyxgcyuj2dx1wuwc9n048t9d9dxt8q95smmsj6giqkl84day7t2t6s1r5orh5kd01c0yvt7vdddda52u6kri29wyyx8fdabf0ilr7s18gvfie1cbxqvueftgvr41m9ol0g0cst8xl749qqs08e0j0hvkvcdoux2yy7vueds2yd1mil4uzdhse99ukuka0yx2bjd6kctjcid4tm03nx0so7v5bwqlwx8kvotth8zsd7d7ptyxdft == \q\i\1\r\v\f\1\y\1\v\2\s\n\5\o\7\f\d\x\0\a\u\5\u\h\u\u\4\h\f\d\g\y\s\6\0\p\3\4\q\h\9\n\z\u\2\3\q\u\d\k\d\l\e\0\l\x\5\d\1\z\8\1\r\s\2\4\n\n\u\f\h\4\s\d\t\a\e\y\e\z\z\3\v\4\k\2\5\h\b\d\b\q\0\w\u\u\l\a\u\x\8\8\j\c\9\j\w\u\w\q\o\b\w\h\0\m\5\o\4\9\r\h\u\g\t\f\1\b\o\4\5\e\c\1\x\o\5\h\3\1\u\b\u\o\m\7\q\3\s\3\9\2\m\d\s\6\3\i\i\h\k\z\u\v\9\1\8\j\5\x\v\y\p\8\l\v\1\q\r\b\v\f\6\w\6\e\j\o\r\2\a\m\d\d\l\y\9\o\z\1\z\9\d\5\f\t\w\z\u\o\z\z\y\1\o\l\g\6\c\3\a\6\r\c\w\b\5\5\q\h\9\2\d\y\m\g\z\n\f\o\9\w\m\4\j\b\p\m\4\v\y\m\9\4\c\e\e\r\y\8\n\7\9\2\a\b\p\e\d\y\x\g\c\y\u\j\2\d\x\1\w\u\w\c\9\n\0\4\8\t\9\d\9\d\x\t\8\q\9\5\s\m\m\s\j\6\g\i\q\k\l\8\4\d\a\y\7\t\2\t\6\s\1\r\5\o\r\h\5\k\d\0\1\c\0\y\v\t\7\v\d\d\d\d\a\5\2\u\6\k\r\i\2\9\w\y\y\x\8\f\d\a\b\f\0\i\l\r\7\s\1\8\g\v\f\i\e\1\c\b\x\q\v\u\e\f\t\g\v\r\4\1\m\9\o\l\0\g\0\c\s\t\8\x\l\7\4\9\q\q\s\0\8\e\0\j\0\h\v\k\v\c\d\o\u\x\2\y\y\7\v\u\e\d\s\2\y\d\1\m\i\l\4\u\z\d\h\s\e\9\9\u\k\u\k\a\0\y\x\2\b\j\d\6\k\c\t\j\c\i\d\4\t\m\0\3\n\x\0\s\o\7\v\5\b\w\q\l\w\x\8\k\v\o\t\t\h\8\z\s\d\7\d\7\p\t\y\x\d\f\t ]] 00:11:12.735 18:18:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:12.735 18:18:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:12.991 [2024-07-22 18:18:24.755226] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
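Between the sync and dsync passes here, the only difference is the write-side flag: sync asks for synchronized data and metadata on every write, while dsync synchronizes the data only. A minimal hedged illustration of the two invocations (output paths are placeholders; the mapping of the flag names to the POSIX open flags is inferred from the test's naming, not from the log itself):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
"$DD" --aio --if="$SRC" --iflag=nonblock --of=/tmp/out.sync  --oflag=sync    # synchronized data + metadata
"$DD" --aio --if="$SRC" --iflag=nonblock --of=/tmp/out.dsync --oflag=dsync   # synchronized data only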
00:11:12.991 [2024-07-22 18:18:24.755379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66280 ] 00:11:12.992 [2024-07-22 18:18:24.921804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.247 [2024-07-22 18:18:25.170387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.505 [2024-07-22 18:18:25.375281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:14.884  Copying: 512/512 [B] (average 250 kBps) 00:11:14.884 00:11:14.884 18:18:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qi1rvf1y1v2sn5o7fdx0au5uhuu4hfdgys60p34qh9nzu23qudkdle0lx5d1z81rs24nnufh4sdtaeyezz3v4k25hbdbq0wuulaux88jc9jwuwqobwh0m5o49rhugtf1bo45ec1xo5h31ubuom7q3s392mds63iihkzuv918j5xvyp8lv1qrbvf6w6ejor2amddly9oz1z9d5ftwzuozzy1olg6c3a6rcwb55qh92dymgznfo9wm4jbpm4vym94ceery8n792abpedyxgcyuj2dx1wuwc9n048t9d9dxt8q95smmsj6giqkl84day7t2t6s1r5orh5kd01c0yvt7vdddda52u6kri29wyyx8fdabf0ilr7s18gvfie1cbxqvueftgvr41m9ol0g0cst8xl749qqs08e0j0hvkvcdoux2yy7vueds2yd1mil4uzdhse99ukuka0yx2bjd6kctjcid4tm03nx0so7v5bwqlwx8kvotth8zsd7d7ptyxdft == \q\i\1\r\v\f\1\y\1\v\2\s\n\5\o\7\f\d\x\0\a\u\5\u\h\u\u\4\h\f\d\g\y\s\6\0\p\3\4\q\h\9\n\z\u\2\3\q\u\d\k\d\l\e\0\l\x\5\d\1\z\8\1\r\s\2\4\n\n\u\f\h\4\s\d\t\a\e\y\e\z\z\3\v\4\k\2\5\h\b\d\b\q\0\w\u\u\l\a\u\x\8\8\j\c\9\j\w\u\w\q\o\b\w\h\0\m\5\o\4\9\r\h\u\g\t\f\1\b\o\4\5\e\c\1\x\o\5\h\3\1\u\b\u\o\m\7\q\3\s\3\9\2\m\d\s\6\3\i\i\h\k\z\u\v\9\1\8\j\5\x\v\y\p\8\l\v\1\q\r\b\v\f\6\w\6\e\j\o\r\2\a\m\d\d\l\y\9\o\z\1\z\9\d\5\f\t\w\z\u\o\z\z\y\1\o\l\g\6\c\3\a\6\r\c\w\b\5\5\q\h\9\2\d\y\m\g\z\n\f\o\9\w\m\4\j\b\p\m\4\v\y\m\9\4\c\e\e\r\y\8\n\7\9\2\a\b\p\e\d\y\x\g\c\y\u\j\2\d\x\1\w\u\w\c\9\n\0\4\8\t\9\d\9\d\x\t\8\q\9\5\s\m\m\s\j\6\g\i\q\k\l\8\4\d\a\y\7\t\2\t\6\s\1\r\5\o\r\h\5\k\d\0\1\c\0\y\v\t\7\v\d\d\d\d\a\5\2\u\6\k\r\i\2\9\w\y\y\x\8\f\d\a\b\f\0\i\l\r\7\s\1\8\g\v\f\i\e\1\c\b\x\q\v\u\e\f\t\g\v\r\4\1\m\9\o\l\0\g\0\c\s\t\8\x\l\7\4\9\q\q\s\0\8\e\0\j\0\h\v\k\v\c\d\o\u\x\2\y\y\7\v\u\e\d\s\2\y\d\1\m\i\l\4\u\z\d\h\s\e\9\9\u\k\u\k\a\0\y\x\2\b\j\d\6\k\c\t\j\c\i\d\4\t\m\0\3\n\x\0\s\o\7\v\5\b\w\q\l\w\x\8\k\v\o\t\t\h\8\z\s\d\7\d\7\p\t\y\x\d\f\t ]] 00:11:14.884 00:11:14.884 real 0m16.464s 00:11:14.884 user 0m13.358s 00:11:14.884 sys 0m2.098s 00:11:14.884 ************************************ 00:11:14.884 END TEST dd_flags_misc_forced_aio 00:11:14.884 ************************************ 00:11:14.884 18:18:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.884 18:18:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:14.885 18:18:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:11:14.885 18:18:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:11:14.885 18:18:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:14.885 18:18:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:14.885 ************************************ 00:11:14.885 END TEST spdk_dd_posix 00:11:14.885 ************************************ 00:11:14.885 00:11:14.885 real 1m8.172s 00:11:14.885 user 0m53.551s 
00:11:14.885 sys 0m17.714s 00:11:14.885 18:18:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.885 18:18:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:14.885 18:18:26 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:11:14.885 18:18:26 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:14.885 18:18:26 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:14.885 18:18:26 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.885 18:18:26 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:14.885 ************************************ 00:11:14.885 START TEST spdk_dd_malloc 00:11:14.885 ************************************ 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:14.885 * Looking for test storage... 00:11:14.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.885 18:18:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:15.142 ************************************ 00:11:15.142 START TEST dd_malloc_copy 00:11:15.142 ************************************ 00:11:15.142 18:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:11:15.142 18:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:11:15.142 18:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:11:15.142 18:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:15.142 18:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:11:15.142 18:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:11:15.142 18:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:11:15.142 18:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:11:15.142 18:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:11:15.142 18:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:15.142 18:18:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:15.142 { 00:11:15.142 "subsystems": [ 00:11:15.142 { 00:11:15.142 "subsystem": "bdev", 00:11:15.142 "config": [ 00:11:15.142 { 00:11:15.142 "params": { 00:11:15.142 "block_size": 512, 00:11:15.143 "num_blocks": 1048576, 00:11:15.143 "name": "malloc0" 00:11:15.143 }, 00:11:15.143 "method": "bdev_malloc_create" 00:11:15.143 }, 00:11:15.143 { 00:11:15.143 "params": { 00:11:15.143 "block_size": 512, 00:11:15.143 "num_blocks": 1048576, 00:11:15.143 "name": "malloc1" 00:11:15.143 }, 00:11:15.143 "method": "bdev_malloc_create" 00:11:15.143 }, 00:11:15.143 { 00:11:15.143 "method": "bdev_wait_for_examine" 00:11:15.143 } 00:11:15.143 ] 00:11:15.143 } 00:11:15.143 ] 00:11:15.143 } 00:11:15.143 [2024-07-22 18:18:27.011489] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
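For context, the dd_malloc_copy run above drives spdk_dd with a bdev configuration streamed in over /dev/fd/62; the JSON interleaved with timestamps in the trace (two malloc bdevs of 1048576 blocks of 512 bytes, plus bdev_wait_for_examine) is the whole of that configuration. Below is a minimal stand-alone sketch of the same malloc0-to-malloc1 copy, using only flags and paths visible in this run; the process substitution stands in for the test's own gen_conf pipe and is an assumption, not the script's exact plumbing.

    # Sketch only: reproduce the malloc0 -> malloc1 copy traced above.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path from this run
    cfg_json='
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_malloc_create", "params": { "name": "malloc0", "block_size": 512, "num_blocks": 1048576 } },
          { "method": "bdev_malloc_create", "params": { "name": "malloc1", "block_size": 512, "num_blocks": 1048576 } },
          { "method": "bdev_wait_for_examine" }
        ]
      } ]
    }'
    # --json expects a path; <(...) provides a /dev/fd/N path, mirroring the /dev/fd/62 in the trace.
    "$SPDK_DD" --ib=malloc0 --ob=malloc1 --json <(printf '%s\n' "$cfg_json")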
00:11:15.143 [2024-07-22 18:18:27.011671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66367 ] 00:11:15.400 [2024-07-22 18:18:27.189380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.658 [2024-07-22 18:18:27.479353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.915 [2024-07-22 18:18:27.684420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:24.468  Copying: 152/512 [MB] (152 MBps) Copying: 303/512 [MB] (151 MBps) Copying: 455/512 [MB] (152 MBps) Copying: 512/512 [MB] (average 151 MBps) 00:11:24.468 00:11:24.468 18:18:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:11:24.468 18:18:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:11:24.468 18:18:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:24.468 18:18:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:24.468 { 00:11:24.468 "subsystems": [ 00:11:24.468 { 00:11:24.468 "subsystem": "bdev", 00:11:24.468 "config": [ 00:11:24.468 { 00:11:24.468 "params": { 00:11:24.468 "block_size": 512, 00:11:24.468 "num_blocks": 1048576, 00:11:24.468 "name": "malloc0" 00:11:24.468 }, 00:11:24.468 "method": "bdev_malloc_create" 00:11:24.468 }, 00:11:24.468 { 00:11:24.468 "params": { 00:11:24.468 "block_size": 512, 00:11:24.468 "num_blocks": 1048576, 00:11:24.468 "name": "malloc1" 00:11:24.468 }, 00:11:24.468 "method": "bdev_malloc_create" 00:11:24.468 }, 00:11:24.468 { 00:11:24.468 "method": "bdev_wait_for_examine" 00:11:24.468 } 00:11:24.468 ] 00:11:24.468 } 00:11:24.468 ] 00:11:24.468 } 00:11:24.468 [2024-07-22 18:18:36.187997] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:24.468 [2024-07-22 18:18:36.188173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66471 ] 00:11:24.468 [2024-07-22 18:18:36.364482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.727 [2024-07-22 18:18:36.624583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.987 [2024-07-22 18:18:36.834518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:33.274  Copying: 148/512 [MB] (148 MBps) Copying: 295/512 [MB] (146 MBps) Copying: 450/512 [MB] (154 MBps) Copying: 512/512 [MB] (average 150 MBps) 00:11:33.274 00:11:33.534 00:11:33.534 real 0m18.402s 00:11:33.534 user 0m16.850s 00:11:33.534 sys 0m1.322s 00:11:33.534 18:18:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.534 ************************************ 00:11:33.534 18:18:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:33.534 END TEST dd_malloc_copy 00:11:33.534 ************************************ 00:11:33.534 18:18:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:11:33.534 00:11:33.534 real 0m18.535s 00:11:33.534 user 0m16.905s 00:11:33.534 sys 0m1.398s 00:11:33.534 18:18:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.534 ************************************ 00:11:33.534 18:18:45 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:33.534 END TEST spdk_dd_malloc 00:11:33.534 ************************************ 00:11:33.534 18:18:45 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:11:33.534 18:18:45 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:33.534 18:18:45 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:33.534 18:18:45 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.534 18:18:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:33.534 ************************************ 00:11:33.534 START TEST spdk_dd_bdev_to_bdev 00:11:33.534 ************************************ 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:33.534 * Looking for test storage... 
00:11:33.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:11:33.534 
18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:33.534 ************************************ 00:11:33.534 START TEST dd_inflate_file 00:11:33.534 ************************************ 00:11:33.534 18:18:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:33.793 [2024-07-22 18:18:45.594914] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
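The dd_inflate_file step above simply grows dd.dump0 by appending zeroes through spdk_dd so the later bdev-to-bdev copies have a large enough source file. A minimal sketch of that invocation, using only the flags and paths shown in the trace:

    # Sketch: append 64 x 1 MiB of zeroes to the dump file, as in the trace above.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$SPDK_DD" --if=/dev/zero \
               --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
               --oflag=append --bs=1048576 --count=64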
00:11:33.793 [2024-07-22 18:18:45.595757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66638 ] 00:11:33.793 [2024-07-22 18:18:45.772613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.052 [2024-07-22 18:18:46.052011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.310 [2024-07-22 18:18:46.256511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:35.946  Copying: 64/64 [MB] (average 1684 MBps) 00:11:35.946 00:11:35.946 00:11:35.946 real 0m2.128s 00:11:35.946 user 0m1.751s 00:11:35.946 sys 0m1.073s 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:11:35.946 ************************************ 00:11:35.946 END TEST dd_inflate_file 00:11:35.946 ************************************ 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:35.946 ************************************ 00:11:35.946 START TEST dd_copy_to_out_bdev 00:11:35.946 ************************************ 00:11:35.946 18:18:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:35.946 { 00:11:35.946 "subsystems": [ 00:11:35.946 { 00:11:35.946 "subsystem": "bdev", 00:11:35.946 "config": [ 00:11:35.946 { 00:11:35.946 "params": { 00:11:35.946 "trtype": "pcie", 00:11:35.946 "traddr": "0000:00:10.0", 00:11:35.946 "name": "Nvme0" 00:11:35.946 }, 00:11:35.946 "method": "bdev_nvme_attach_controller" 00:11:35.946 }, 00:11:35.946 { 00:11:35.946 "params": { 00:11:35.946 "trtype": "pcie", 00:11:35.946 "traddr": "0000:00:11.0", 00:11:35.946 "name": "Nvme1" 00:11:35.946 }, 00:11:35.946 "method": "bdev_nvme_attach_controller" 00:11:35.946 }, 00:11:35.946 { 00:11:35.946 "method": "bdev_wait_for_examine" 00:11:35.946 } 00:11:35.946 ] 00:11:35.946 } 00:11:35.946 ] 00:11:35.946 } 00:11:35.946 [2024-07-22 18:18:47.795030] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:35.946 [2024-07-22 18:18:47.795842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66691 ] 00:11:36.205 [2024-07-22 18:18:47.977541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.462 [2024-07-22 18:18:48.267764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.462 [2024-07-22 18:18:48.475112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:39.471  Copying: 53/64 [MB] (53 MBps) Copying: 64/64 [MB] (average 53 MBps) 00:11:39.471 00:11:39.471 00:11:39.471 real 0m3.491s 00:11:39.471 user 0m3.108s 00:11:39.471 sys 0m2.298s 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.471 ************************************ 00:11:39.471 END TEST dd_copy_to_out_bdev 00:11:39.471 ************************************ 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:39.471 ************************************ 00:11:39.471 START TEST dd_offset_magic 00:11:39.471 ************************************ 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:39.471 18:18:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:39.471 { 00:11:39.471 "subsystems": [ 00:11:39.471 { 00:11:39.471 "subsystem": "bdev", 00:11:39.471 "config": [ 00:11:39.471 { 00:11:39.471 "params": { 00:11:39.471 "trtype": "pcie", 00:11:39.471 "traddr": "0000:00:10.0", 00:11:39.471 "name": "Nvme0" 00:11:39.471 }, 00:11:39.471 "method": "bdev_nvme_attach_controller" 00:11:39.471 }, 00:11:39.471 { 00:11:39.471 "params": { 00:11:39.471 "trtype": "pcie", 00:11:39.471 "traddr": 
"0000:00:11.0", 00:11:39.471 "name": "Nvme1" 00:11:39.471 }, 00:11:39.471 "method": "bdev_nvme_attach_controller" 00:11:39.471 }, 00:11:39.471 { 00:11:39.471 "method": "bdev_wait_for_examine" 00:11:39.471 } 00:11:39.471 ] 00:11:39.471 } 00:11:39.471 ] 00:11:39.471 } 00:11:39.471 [2024-07-22 18:18:51.310455] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:11:39.471 [2024-07-22 18:18:51.310605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66751 ] 00:11:39.471 [2024-07-22 18:18:51.473403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.729 [2024-07-22 18:18:51.729976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.987 [2024-07-22 18:18:51.939835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:41.497  Copying: 65/65 [MB] (average 902 MBps) 00:11:41.497 00:11:41.497 18:18:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:11:41.497 18:18:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:41.497 18:18:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:41.497 18:18:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:41.497 { 00:11:41.497 "subsystems": [ 00:11:41.497 { 00:11:41.497 "subsystem": "bdev", 00:11:41.497 "config": [ 00:11:41.497 { 00:11:41.497 "params": { 00:11:41.497 "trtype": "pcie", 00:11:41.497 "traddr": "0000:00:10.0", 00:11:41.497 "name": "Nvme0" 00:11:41.497 }, 00:11:41.497 "method": "bdev_nvme_attach_controller" 00:11:41.497 }, 00:11:41.497 { 00:11:41.497 "params": { 00:11:41.497 "trtype": "pcie", 00:11:41.497 "traddr": "0000:00:11.0", 00:11:41.497 "name": "Nvme1" 00:11:41.497 }, 00:11:41.497 "method": "bdev_nvme_attach_controller" 00:11:41.497 }, 00:11:41.497 { 00:11:41.497 "method": "bdev_wait_for_examine" 00:11:41.497 } 00:11:41.497 ] 00:11:41.497 } 00:11:41.497 ] 00:11:41.497 } 00:11:41.497 [2024-07-22 18:18:53.442858] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:41.497 [2024-07-22 18:18:53.443044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66783 ] 00:11:41.756 [2024-07-22 18:18:53.626211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.015 [2024-07-22 18:18:53.868458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.273 [2024-07-22 18:18:54.076465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:43.942  Copying: 1024/1024 [kB] (average 500 MBps) 00:11:43.942 00:11:43.942 18:18:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:43.942 18:18:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:43.942 18:18:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:43.942 18:18:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:43.942 18:18:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:11:43.942 18:18:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:43.942 18:18:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:43.942 { 00:11:43.942 "subsystems": [ 00:11:43.942 { 00:11:43.942 "subsystem": "bdev", 00:11:43.942 "config": [ 00:11:43.942 { 00:11:43.942 "params": { 00:11:43.942 "trtype": "pcie", 00:11:43.942 "traddr": "0000:00:10.0", 00:11:43.942 "name": "Nvme0" 00:11:43.942 }, 00:11:43.942 "method": "bdev_nvme_attach_controller" 00:11:43.942 }, 00:11:43.942 { 00:11:43.942 "params": { 00:11:43.942 "trtype": "pcie", 00:11:43.942 "traddr": "0000:00:11.0", 00:11:43.942 "name": "Nvme1" 00:11:43.942 }, 00:11:43.942 "method": "bdev_nvme_attach_controller" 00:11:43.942 }, 00:11:43.942 { 00:11:43.942 "method": "bdev_wait_for_examine" 00:11:43.942 } 00:11:43.942 ] 00:11:43.942 } 00:11:43.942 ] 00:11:43.942 } 00:11:43.942 [2024-07-22 18:18:55.654293] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:43.942 [2024-07-22 18:18:55.654453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66820 ] 00:11:43.942 [2024-07-22 18:18:55.815923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.200 [2024-07-22 18:18:56.070639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.458 [2024-07-22 18:18:56.281756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:45.650  Copying: 65/65 [MB] (average 1031 MBps) 00:11:45.650 00:11:45.650 18:18:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:11:45.650 18:18:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:45.650 18:18:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:45.650 18:18:57 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:45.908 { 00:11:45.908 "subsystems": [ 00:11:45.908 { 00:11:45.908 "subsystem": "bdev", 00:11:45.908 "config": [ 00:11:45.908 { 00:11:45.908 "params": { 00:11:45.908 "trtype": "pcie", 00:11:45.908 "traddr": "0000:00:10.0", 00:11:45.908 "name": "Nvme0" 00:11:45.908 }, 00:11:45.908 "method": "bdev_nvme_attach_controller" 00:11:45.908 }, 00:11:45.908 { 00:11:45.908 "params": { 00:11:45.908 "trtype": "pcie", 00:11:45.908 "traddr": "0000:00:11.0", 00:11:45.908 "name": "Nvme1" 00:11:45.908 }, 00:11:45.908 "method": "bdev_nvme_attach_controller" 00:11:45.908 }, 00:11:45.908 { 00:11:45.908 "method": "bdev_wait_for_examine" 00:11:45.908 } 00:11:45.908 ] 00:11:45.908 } 00:11:45.908 ] 00:11:45.908 } 00:11:45.908 [2024-07-22 18:18:57.744570] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:45.908 [2024-07-22 18:18:57.744777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66851 ] 00:11:45.908 [2024-07-22 18:18:57.921876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.167 [2024-07-22 18:18:58.158400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.425 [2024-07-22 18:18:58.371743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:48.055  Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:48.055 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:48.055 00:11:48.055 real 0m8.615s 00:11:48.055 user 0m7.240s 00:11:48.055 sys 0m2.735s 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.055 ************************************ 00:11:48.055 END TEST dd_offset_magic 00:11:48.055 ************************************ 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:48.055 18:18:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:48.055 { 00:11:48.055 "subsystems": [ 00:11:48.055 { 00:11:48.055 "subsystem": "bdev", 00:11:48.055 "config": [ 00:11:48.055 { 00:11:48.055 "params": { 00:11:48.055 "trtype": "pcie", 00:11:48.055 "traddr": "0000:00:10.0", 00:11:48.055 "name": "Nvme0" 00:11:48.055 }, 00:11:48.055 "method": "bdev_nvme_attach_controller" 00:11:48.055 }, 00:11:48.055 { 00:11:48.055 "params": { 00:11:48.055 "trtype": "pcie", 00:11:48.055 "traddr": "0000:00:11.0", 00:11:48.055 "name": "Nvme1" 00:11:48.055 }, 00:11:48.055 "method": "bdev_nvme_attach_controller" 00:11:48.055 }, 00:11:48.055 { 00:11:48.055 "method": "bdev_wait_for_examine" 00:11:48.055 } 00:11:48.055 ] 00:11:48.055 } 00:11:48.055 ] 00:11:48.055 } 00:11:48.055 [2024-07-22 18:18:59.973757] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
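The clear_nvme cleanup above zeroes the start of each namespace by copying from /dev/zero through spdk_dd: the 4194330-byte size (4 MiB plus the 26-byte magic) rounds up to five 1 MiB blocks, hence count=5. A short sketch of the Nvme0n1 pass, again assuming a $cfg file with the attach configuration printed in the log:

    # clear_nvme as traced above: overwrite the first 5 x 1 MiB blocks with zeroes.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    cfg=/tmp/nvme_attach.json   # ASSUMED: Nvme0/Nvme1 pcie attach JSON from the log
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json "$cfg"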
00:11:48.055 [2024-07-22 18:18:59.974088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66901 ] 00:11:48.313 [2024-07-22 18:19:00.139052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.571 [2024-07-22 18:19:00.381570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.571 [2024-07-22 18:19:00.576598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:50.206  Copying: 5120/5120 [kB] (average 1000 MBps) 00:11:50.206 00:11:50.206 18:19:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:50.206 18:19:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:50.206 18:19:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:50.206 18:19:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:50.206 18:19:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:50.206 18:19:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:50.206 18:19:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:11:50.206 18:19:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:50.206 18:19:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:50.206 18:19:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:50.206 { 00:11:50.206 "subsystems": [ 00:11:50.206 { 00:11:50.206 "subsystem": "bdev", 00:11:50.206 "config": [ 00:11:50.206 { 00:11:50.206 "params": { 00:11:50.206 "trtype": "pcie", 00:11:50.206 "traddr": "0000:00:10.0", 00:11:50.206 "name": "Nvme0" 00:11:50.206 }, 00:11:50.206 "method": "bdev_nvme_attach_controller" 00:11:50.206 }, 00:11:50.206 { 00:11:50.206 "params": { 00:11:50.206 "trtype": "pcie", 00:11:50.206 "traddr": "0000:00:11.0", 00:11:50.206 "name": "Nvme1" 00:11:50.206 }, 00:11:50.206 "method": "bdev_nvme_attach_controller" 00:11:50.206 }, 00:11:50.206 { 00:11:50.206 "method": "bdev_wait_for_examine" 00:11:50.206 } 00:11:50.206 ] 00:11:50.206 } 00:11:50.206 ] 00:11:50.206 } 00:11:50.206 [2024-07-22 18:19:01.961281] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:50.206 [2024-07-22 18:19:01.961459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66934 ] 00:11:50.206 [2024-07-22 18:19:02.139210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.464 [2024-07-22 18:19:02.390431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.722 [2024-07-22 18:19:02.580839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:52.357  Copying: 5120/5120 [kB] (average 714 MBps) 00:11:52.357 00:11:52.357 18:19:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:52.357 ************************************ 00:11:52.357 END TEST spdk_dd_bdev_to_bdev 00:11:52.357 ************************************ 00:11:52.357 00:11:52.357 real 0m18.638s 00:11:52.357 user 0m15.666s 00:11:52.357 sys 0m8.126s 00:11:52.357 18:19:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.357 18:19:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:52.357 18:19:04 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:11:52.357 18:19:04 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:52.357 18:19:04 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:52.357 18:19:04 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:52.357 18:19:04 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.357 18:19:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:52.357 ************************************ 00:11:52.357 START TEST spdk_dd_uring 00:11:52.357 ************************************ 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:52.357 * Looking for test storage... 
00:11:52.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:52.357 ************************************ 00:11:52.357 START TEST dd_uring_copy 00:11:52.357 ************************************ 00:11:52.357 
18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:11:52.357 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=0ol233n6suisp923v019zz0htmji0li052k5bhsecprq24m3ay98p3aopaz4y4lea483thlizj5s7taevm708ultyv2pil9jmzhtgoqou4c6jxt872wjzzs0150iw2spd21kmy3mxelrs1ca0c6fx2ljxy8j5z4klvou4dudlbcet0ufwe0l2i8nozsm9laujcfwjpr6zeoi69ns4lq8xrmjxidrhoht4whusm7rqrecnbi97497anogln27xr9w8a9sk9sxmj42l0gjux1e7qa39h79rpxs6fpn70f43bbk9877w91xe01uyg5w3kydks2kd86i1pgs7ophwps6k1nef8fx23kgmgas70i127gofbgilugtsfubg81ugbvdvmrioqgq44ey3tm8dp352k3bn41r2qwj2mdmzk4drg3isgv5xq7vhsbv9b2mjs7mej4kvs5w55tkinz7pmb4f0dvvcp4pnvcn64wi604wlq2bty8wnqaqrhczu1ufm0kcpuwwrzops77ghm0uawza1gmjzi3rzhvni3a90yzzn20b1b07sdxjhd4eg9lww3cifntel6mw32xevrgfm17sbxgcw2pt5wp2oltc6vrlrxoia37xny5s7qs3b22zr57wo0lv9nnx9zr9xmzmacdpw845rflcsxzpbux3t4mr0kgcowj05wgnft55zwflv6p8o3mabdwm18we8cb21v2uo46b0cg6u4l94g6cf3jvqjy93ryol2ura5oexlnth1mmvzzk6b85te6fpolvxwikwxvrnymqn45xm7nv06my4f4ma70l1lp3jjj2s3mlntebniebulwlyq5gavq6mho32n9i83rhyjc6inq838g6bc0wn7x1rxhabc8wb6btu4l1goccq1wgm24f4lzqvsho9fnng1zu8u2bc2b7ivr9grxwm59nwgq3hvd4qa4hnft6uzu9l51zp2w0wrjiitww1zhi7gb4ufbnuc1x7lprcnbs8vtwupr3d0wvselqcji 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 0ol233n6suisp923v019zz0htmji0li052k5bhsecprq24m3ay98p3aopaz4y4lea483thlizj5s7taevm708ultyv2pil9jmzhtgoqou4c6jxt872wjzzs0150iw2spd21kmy3mxelrs1ca0c6fx2ljxy8j5z4klvou4dudlbcet0ufwe0l2i8nozsm9laujcfwjpr6zeoi69ns4lq8xrmjxidrhoht4whusm7rqrecnbi97497anogln27xr9w8a9sk9sxmj42l0gjux1e7qa39h79rpxs6fpn70f43bbk9877w91xe01uyg5w3kydks2kd86i1pgs7ophwps6k1nef8fx23kgmgas70i127gofbgilugtsfubg81ugbvdvmrioqgq44ey3tm8dp352k3bn41r2qwj2mdmzk4drg3isgv5xq7vhsbv9b2mjs7mej4kvs5w55tkinz7pmb4f0dvvcp4pnvcn64wi604wlq2bty8wnqaqrhczu1ufm0kcpuwwrzops77ghm0uawza1gmjzi3rzhvni3a90yzzn20b1b07sdxjhd4eg9lww3cifntel6mw32xevrgfm17sbxgcw2pt5wp2oltc6vrlrxoia37xny5s7qs3b22zr57wo0lv9nnx9zr9xmzmacdpw845rflcsxzpbux3t4mr0kgcowj05wgnft55zwflv6p8o3mabdwm18we8cb21v2uo46b0cg6u4l94g6cf3jvqjy93ryol2ura5oexlnth1mmvzzk6b85te6fpolvxwikwxvrnymqn45xm7nv06my4f4ma70l1lp3jjj2s3mlntebniebulwlyq5gavq6mho32n9i83rhyjc6inq838g6bc0wn7x1rxhabc8wb6btu4l1goccq1wgm24f4lzqvsho9fnng1zu8u2bc2b7ivr9grxwm59nwgq3hvd4qa4hnft6uzu9l51zp2w0wrjiitww1zhi7gb4ufbnuc1x7lprcnbs8vtwupr3d0wvselqcji 00:11:52.358 18:19:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:52.358 [2024-07-22 18:19:04.314437] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
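The dd_uring_copy prologue traced above allocates a zram device through /sys/class/zram-control/hot_add, sizes it to 512M, prepares a uring bdev named uring0 on /dev/zram1, and seeds magic.dump0 with the 1024-character magic followed by zero padding; the 536869887-byte append block is 512 MiB minus 1025 bytes, leaving room for the magic and a trailing newline at the front of the file. A rough sketch of that setup; the disksize sysfs path is an assumption (the trace shows only "echo 512M"), the rest comes from the commands above, and the whole sequence needs root.

    # Sketch of the zram/uring prologue traced above (run as root).
    dev_id=$(cat /sys/class/zram-control/hot_add)        # prints the new device id, e.g. 1 -> /dev/zram1
    echo 512M > "/sys/block/zram${dev_id}/disksize"      # ASSUMED target; the log shows only 'echo 512M'
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0
    # The test writes the 1024-character magic into magic.dump0 first (target of the
    # 'echo' is not shown in the trace), then pads the file with zeroes via spdk_dd:
    "$SPDK_DD" --if=/dev/zero --of="$magic_file0" --oflag=append --bs=536869887 --count=1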
00:11:52.358 [2024-07-22 18:19:04.314645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67016 ] 00:11:52.617 [2024-07-22 18:19:04.491932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.877 [2024-07-22 18:19:04.775429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.136 [2024-07-22 18:19:04.991154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:56.603  Copying: 511/511 [MB] (average 1462 MBps) 00:11:56.603 00:11:56.603 18:19:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:11:56.603 18:19:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:11:56.603 18:19:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:56.603 18:19:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:56.861 { 00:11:56.861 "subsystems": [ 00:11:56.861 { 00:11:56.861 "subsystem": "bdev", 00:11:56.861 "config": [ 00:11:56.861 { 00:11:56.861 "params": { 00:11:56.861 "block_size": 512, 00:11:56.861 "num_blocks": 1048576, 00:11:56.861 "name": "malloc0" 00:11:56.861 }, 00:11:56.861 "method": "bdev_malloc_create" 00:11:56.861 }, 00:11:56.861 { 00:11:56.861 "params": { 00:11:56.861 "filename": "/dev/zram1", 00:11:56.861 "name": "uring0" 00:11:56.861 }, 00:11:56.861 "method": "bdev_uring_create" 00:11:56.861 }, 00:11:56.861 { 00:11:56.861 "method": "bdev_wait_for_examine" 00:11:56.861 } 00:11:56.861 ] 00:11:56.861 } 00:11:56.861 ] 00:11:56.861 } 00:11:56.861 [2024-07-22 18:19:08.677803] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:56.862 [2024-07-22 18:19:08.678006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67066 ] 00:11:56.862 [2024-07-22 18:19:08.853650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.119 [2024-07-22 18:19:09.102892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.378 [2024-07-22 18:19:09.317338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:03.679  Copying: 182/512 [MB] (182 MBps) Copying: 353/512 [MB] (171 MBps) Copying: 512/512 [MB] (average 174 MBps) 00:12:03.679 00:12:03.679 18:19:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:12:03.679 18:19:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:12:03.679 18:19:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:03.679 18:19:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:03.679 { 00:12:03.679 "subsystems": [ 00:12:03.679 { 00:12:03.679 "subsystem": "bdev", 00:12:03.679 "config": [ 00:12:03.679 { 00:12:03.679 "params": { 00:12:03.679 "block_size": 512, 00:12:03.679 "num_blocks": 1048576, 00:12:03.679 "name": "malloc0" 00:12:03.679 }, 00:12:03.679 "method": "bdev_malloc_create" 00:12:03.679 }, 00:12:03.679 { 00:12:03.679 "params": { 00:12:03.679 "filename": "/dev/zram1", 00:12:03.679 "name": "uring0" 00:12:03.679 }, 00:12:03.679 "method": "bdev_uring_create" 00:12:03.679 }, 00:12:03.679 { 00:12:03.679 "method": "bdev_wait_for_examine" 00:12:03.679 } 00:12:03.679 ] 00:12:03.679 } 00:12:03.679 ] 00:12:03.679 } 00:12:03.679 [2024-07-22 18:19:15.622014] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:03.679 [2024-07-22 18:19:15.622237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67154 ] 00:12:03.938 [2024-07-22 18:19:15.804611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.196 [2024-07-22 18:19:16.089699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.454 [2024-07-22 18:19:16.301246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:11.317  Copying: 143/512 [MB] (143 MBps) Copying: 288/512 [MB] (145 MBps) Copying: 413/512 [MB] (125 MBps) Copying: 512/512 [MB] (average 139 MBps) 00:12:11.317 00:12:11.317 18:19:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:12:11.317 18:19:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 0ol233n6suisp923v019zz0htmji0li052k5bhsecprq24m3ay98p3aopaz4y4lea483thlizj5s7taevm708ultyv2pil9jmzhtgoqou4c6jxt872wjzzs0150iw2spd21kmy3mxelrs1ca0c6fx2ljxy8j5z4klvou4dudlbcet0ufwe0l2i8nozsm9laujcfwjpr6zeoi69ns4lq8xrmjxidrhoht4whusm7rqrecnbi97497anogln27xr9w8a9sk9sxmj42l0gjux1e7qa39h79rpxs6fpn70f43bbk9877w91xe01uyg5w3kydks2kd86i1pgs7ophwps6k1nef8fx23kgmgas70i127gofbgilugtsfubg81ugbvdvmrioqgq44ey3tm8dp352k3bn41r2qwj2mdmzk4drg3isgv5xq7vhsbv9b2mjs7mej4kvs5w55tkinz7pmb4f0dvvcp4pnvcn64wi604wlq2bty8wnqaqrhczu1ufm0kcpuwwrzops77ghm0uawza1gmjzi3rzhvni3a90yzzn20b1b07sdxjhd4eg9lww3cifntel6mw32xevrgfm17sbxgcw2pt5wp2oltc6vrlrxoia37xny5s7qs3b22zr57wo0lv9nnx9zr9xmzmacdpw845rflcsxzpbux3t4mr0kgcowj05wgnft55zwflv6p8o3mabdwm18we8cb21v2uo46b0cg6u4l94g6cf3jvqjy93ryol2ura5oexlnth1mmvzzk6b85te6fpolvxwikwxvrnymqn45xm7nv06my4f4ma70l1lp3jjj2s3mlntebniebulwlyq5gavq6mho32n9i83rhyjc6inq838g6bc0wn7x1rxhabc8wb6btu4l1goccq1wgm24f4lzqvsho9fnng1zu8u2bc2b7ivr9grxwm59nwgq3hvd4qa4hnft6uzu9l51zp2w0wrjiitww1zhi7gb4ufbnuc1x7lprcnbs8vtwupr3d0wvselqcji == 
\0\o\l\2\3\3\n\6\s\u\i\s\p\9\2\3\v\0\1\9\z\z\0\h\t\m\j\i\0\l\i\0\5\2\k\5\b\h\s\e\c\p\r\q\2\4\m\3\a\y\9\8\p\3\a\o\p\a\z\4\y\4\l\e\a\4\8\3\t\h\l\i\z\j\5\s\7\t\a\e\v\m\7\0\8\u\l\t\y\v\2\p\i\l\9\j\m\z\h\t\g\o\q\o\u\4\c\6\j\x\t\8\7\2\w\j\z\z\s\0\1\5\0\i\w\2\s\p\d\2\1\k\m\y\3\m\x\e\l\r\s\1\c\a\0\c\6\f\x\2\l\j\x\y\8\j\5\z\4\k\l\v\o\u\4\d\u\d\l\b\c\e\t\0\u\f\w\e\0\l\2\i\8\n\o\z\s\m\9\l\a\u\j\c\f\w\j\p\r\6\z\e\o\i\6\9\n\s\4\l\q\8\x\r\m\j\x\i\d\r\h\o\h\t\4\w\h\u\s\m\7\r\q\r\e\c\n\b\i\9\7\4\9\7\a\n\o\g\l\n\2\7\x\r\9\w\8\a\9\s\k\9\s\x\m\j\4\2\l\0\g\j\u\x\1\e\7\q\a\3\9\h\7\9\r\p\x\s\6\f\p\n\7\0\f\4\3\b\b\k\9\8\7\7\w\9\1\x\e\0\1\u\y\g\5\w\3\k\y\d\k\s\2\k\d\8\6\i\1\p\g\s\7\o\p\h\w\p\s\6\k\1\n\e\f\8\f\x\2\3\k\g\m\g\a\s\7\0\i\1\2\7\g\o\f\b\g\i\l\u\g\t\s\f\u\b\g\8\1\u\g\b\v\d\v\m\r\i\o\q\g\q\4\4\e\y\3\t\m\8\d\p\3\5\2\k\3\b\n\4\1\r\2\q\w\j\2\m\d\m\z\k\4\d\r\g\3\i\s\g\v\5\x\q\7\v\h\s\b\v\9\b\2\m\j\s\7\m\e\j\4\k\v\s\5\w\5\5\t\k\i\n\z\7\p\m\b\4\f\0\d\v\v\c\p\4\p\n\v\c\n\6\4\w\i\6\0\4\w\l\q\2\b\t\y\8\w\n\q\a\q\r\h\c\z\u\1\u\f\m\0\k\c\p\u\w\w\r\z\o\p\s\7\7\g\h\m\0\u\a\w\z\a\1\g\m\j\z\i\3\r\z\h\v\n\i\3\a\9\0\y\z\z\n\2\0\b\1\b\0\7\s\d\x\j\h\d\4\e\g\9\l\w\w\3\c\i\f\n\t\e\l\6\m\w\3\2\x\e\v\r\g\f\m\1\7\s\b\x\g\c\w\2\p\t\5\w\p\2\o\l\t\c\6\v\r\l\r\x\o\i\a\3\7\x\n\y\5\s\7\q\s\3\b\2\2\z\r\5\7\w\o\0\l\v\9\n\n\x\9\z\r\9\x\m\z\m\a\c\d\p\w\8\4\5\r\f\l\c\s\x\z\p\b\u\x\3\t\4\m\r\0\k\g\c\o\w\j\0\5\w\g\n\f\t\5\5\z\w\f\l\v\6\p\8\o\3\m\a\b\d\w\m\1\8\w\e\8\c\b\2\1\v\2\u\o\4\6\b\0\c\g\6\u\4\l\9\4\g\6\c\f\3\j\v\q\j\y\9\3\r\y\o\l\2\u\r\a\5\o\e\x\l\n\t\h\1\m\m\v\z\z\k\6\b\8\5\t\e\6\f\p\o\l\v\x\w\i\k\w\x\v\r\n\y\m\q\n\4\5\x\m\7\n\v\0\6\m\y\4\f\4\m\a\7\0\l\1\l\p\3\j\j\j\2\s\3\m\l\n\t\e\b\n\i\e\b\u\l\w\l\y\q\5\g\a\v\q\6\m\h\o\3\2\n\9\i\8\3\r\h\y\j\c\6\i\n\q\8\3\8\g\6\b\c\0\w\n\7\x\1\r\x\h\a\b\c\8\w\b\6\b\t\u\4\l\1\g\o\c\c\q\1\w\g\m\2\4\f\4\l\z\q\v\s\h\o\9\f\n\n\g\1\z\u\8\u\2\b\c\2\b\7\i\v\r\9\g\r\x\w\m\5\9\n\w\g\q\3\h\v\d\4\q\a\4\h\n\f\t\6\u\z\u\9\l\5\1\z\p\2\w\0\w\r\j\i\i\t\w\w\1\z\h\i\7\g\b\4\u\f\b\n\u\c\1\x\7\l\p\r\c\n\b\s\8\v\t\w\u\p\r\3\d\0\w\v\s\e\l\q\c\j\i ]] 00:12:11.317 18:19:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:12:11.317 18:19:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 0ol233n6suisp923v019zz0htmji0li052k5bhsecprq24m3ay98p3aopaz4y4lea483thlizj5s7taevm708ultyv2pil9jmzhtgoqou4c6jxt872wjzzs0150iw2spd21kmy3mxelrs1ca0c6fx2ljxy8j5z4klvou4dudlbcet0ufwe0l2i8nozsm9laujcfwjpr6zeoi69ns4lq8xrmjxidrhoht4whusm7rqrecnbi97497anogln27xr9w8a9sk9sxmj42l0gjux1e7qa39h79rpxs6fpn70f43bbk9877w91xe01uyg5w3kydks2kd86i1pgs7ophwps6k1nef8fx23kgmgas70i127gofbgilugtsfubg81ugbvdvmrioqgq44ey3tm8dp352k3bn41r2qwj2mdmzk4drg3isgv5xq7vhsbv9b2mjs7mej4kvs5w55tkinz7pmb4f0dvvcp4pnvcn64wi604wlq2bty8wnqaqrhczu1ufm0kcpuwwrzops77ghm0uawza1gmjzi3rzhvni3a90yzzn20b1b07sdxjhd4eg9lww3cifntel6mw32xevrgfm17sbxgcw2pt5wp2oltc6vrlrxoia37xny5s7qs3b22zr57wo0lv9nnx9zr9xmzmacdpw845rflcsxzpbux3t4mr0kgcowj05wgnft55zwflv6p8o3mabdwm18we8cb21v2uo46b0cg6u4l94g6cf3jvqjy93ryol2ura5oexlnth1mmvzzk6b85te6fpolvxwikwxvrnymqn45xm7nv06my4f4ma70l1lp3jjj2s3mlntebniebulwlyq5gavq6mho32n9i83rhyjc6inq838g6bc0wn7x1rxhabc8wb6btu4l1goccq1wgm24f4lzqvsho9fnng1zu8u2bc2b7ivr9grxwm59nwgq3hvd4qa4hnft6uzu9l51zp2w0wrjiitww1zhi7gb4ufbnuc1x7lprcnbs8vtwupr3d0wvselqcji == 
\0\o\l\2\3\3\n\6\s\u\i\s\p\9\2\3\v\0\1\9\z\z\0\h\t\m\j\i\0\l\i\0\5\2\k\5\b\h\s\e\c\p\r\q\2\4\m\3\a\y\9\8\p\3\a\o\p\a\z\4\y\4\l\e\a\4\8\3\t\h\l\i\z\j\5\s\7\t\a\e\v\m\7\0\8\u\l\t\y\v\2\p\i\l\9\j\m\z\h\t\g\o\q\o\u\4\c\6\j\x\t\8\7\2\w\j\z\z\s\0\1\5\0\i\w\2\s\p\d\2\1\k\m\y\3\m\x\e\l\r\s\1\c\a\0\c\6\f\x\2\l\j\x\y\8\j\5\z\4\k\l\v\o\u\4\d\u\d\l\b\c\e\t\0\u\f\w\e\0\l\2\i\8\n\o\z\s\m\9\l\a\u\j\c\f\w\j\p\r\6\z\e\o\i\6\9\n\s\4\l\q\8\x\r\m\j\x\i\d\r\h\o\h\t\4\w\h\u\s\m\7\r\q\r\e\c\n\b\i\9\7\4\9\7\a\n\o\g\l\n\2\7\x\r\9\w\8\a\9\s\k\9\s\x\m\j\4\2\l\0\g\j\u\x\1\e\7\q\a\3\9\h\7\9\r\p\x\s\6\f\p\n\7\0\f\4\3\b\b\k\9\8\7\7\w\9\1\x\e\0\1\u\y\g\5\w\3\k\y\d\k\s\2\k\d\8\6\i\1\p\g\s\7\o\p\h\w\p\s\6\k\1\n\e\f\8\f\x\2\3\k\g\m\g\a\s\7\0\i\1\2\7\g\o\f\b\g\i\l\u\g\t\s\f\u\b\g\8\1\u\g\b\v\d\v\m\r\i\o\q\g\q\4\4\e\y\3\t\m\8\d\p\3\5\2\k\3\b\n\4\1\r\2\q\w\j\2\m\d\m\z\k\4\d\r\g\3\i\s\g\v\5\x\q\7\v\h\s\b\v\9\b\2\m\j\s\7\m\e\j\4\k\v\s\5\w\5\5\t\k\i\n\z\7\p\m\b\4\f\0\d\v\v\c\p\4\p\n\v\c\n\6\4\w\i\6\0\4\w\l\q\2\b\t\y\8\w\n\q\a\q\r\h\c\z\u\1\u\f\m\0\k\c\p\u\w\w\r\z\o\p\s\7\7\g\h\m\0\u\a\w\z\a\1\g\m\j\z\i\3\r\z\h\v\n\i\3\a\9\0\y\z\z\n\2\0\b\1\b\0\7\s\d\x\j\h\d\4\e\g\9\l\w\w\3\c\i\f\n\t\e\l\6\m\w\3\2\x\e\v\r\g\f\m\1\7\s\b\x\g\c\w\2\p\t\5\w\p\2\o\l\t\c\6\v\r\l\r\x\o\i\a\3\7\x\n\y\5\s\7\q\s\3\b\2\2\z\r\5\7\w\o\0\l\v\9\n\n\x\9\z\r\9\x\m\z\m\a\c\d\p\w\8\4\5\r\f\l\c\s\x\z\p\b\u\x\3\t\4\m\r\0\k\g\c\o\w\j\0\5\w\g\n\f\t\5\5\z\w\f\l\v\6\p\8\o\3\m\a\b\d\w\m\1\8\w\e\8\c\b\2\1\v\2\u\o\4\6\b\0\c\g\6\u\4\l\9\4\g\6\c\f\3\j\v\q\j\y\9\3\r\y\o\l\2\u\r\a\5\o\e\x\l\n\t\h\1\m\m\v\z\z\k\6\b\8\5\t\e\6\f\p\o\l\v\x\w\i\k\w\x\v\r\n\y\m\q\n\4\5\x\m\7\n\v\0\6\m\y\4\f\4\m\a\7\0\l\1\l\p\3\j\j\j\2\s\3\m\l\n\t\e\b\n\i\e\b\u\l\w\l\y\q\5\g\a\v\q\6\m\h\o\3\2\n\9\i\8\3\r\h\y\j\c\6\i\n\q\8\3\8\g\6\b\c\0\w\n\7\x\1\r\x\h\a\b\c\8\w\b\6\b\t\u\4\l\1\g\o\c\c\q\1\w\g\m\2\4\f\4\l\z\q\v\s\h\o\9\f\n\n\g\1\z\u\8\u\2\b\c\2\b\7\i\v\r\9\g\r\x\w\m\5\9\n\w\g\q\3\h\v\d\4\q\a\4\h\n\f\t\6\u\z\u\9\l\5\1\z\p\2\w\0\w\r\j\i\i\t\w\w\1\z\h\i\7\g\b\4\u\f\b\n\u\c\1\x\7\l\p\r\c\n\b\s\8\v\t\w\u\p\r\3\d\0\w\v\s\e\l\q\c\j\i ]] 00:12:11.317 18:19:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:11.883 18:19:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:12:11.883 18:19:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:12:11.883 18:19:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:11.883 18:19:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:11.883 { 00:12:11.883 "subsystems": [ 00:12:11.883 { 00:12:11.883 "subsystem": "bdev", 00:12:11.883 "config": [ 00:12:11.883 { 00:12:11.883 "params": { 00:12:11.883 "block_size": 512, 00:12:11.883 "num_blocks": 1048576, 00:12:11.883 "name": "malloc0" 00:12:11.883 }, 00:12:11.883 "method": "bdev_malloc_create" 00:12:11.883 }, 00:12:11.883 { 00:12:11.883 "params": { 00:12:11.883 "filename": "/dev/zram1", 00:12:11.883 "name": "uring0" 00:12:11.883 }, 00:12:11.883 "method": "bdev_uring_create" 00:12:11.883 }, 00:12:11.883 { 00:12:11.883 "method": "bdev_wait_for_examine" 00:12:11.883 } 00:12:11.883 ] 00:12:11.883 } 00:12:11.883 ] 00:12:11.883 } 00:12:11.883 [2024-07-22 18:19:23.740512] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:11.883 [2024-07-22 18:19:23.740718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67289 ] 00:12:12.142 [2024-07-22 18:19:23.917929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.400 [2024-07-22 18:19:24.163614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.400 [2024-07-22 18:19:24.370896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:20.291  Copying: 118/512 [MB] (118 MBps) Copying: 232/512 [MB] (113 MBps) Copying: 347/512 [MB] (115 MBps) Copying: 460/512 [MB] (112 MBps) Copying: 512/512 [MB] (average 115 MBps) 00:12:20.291 00:12:20.291 18:19:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:12:20.291 18:19:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:12:20.291 18:19:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:12:20.291 18:19:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:12:20.291 18:19:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:12:20.291 18:19:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:12:20.291 18:19:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:20.291 18:19:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:20.291 { 00:12:20.291 "subsystems": [ 00:12:20.291 { 00:12:20.291 "subsystem": "bdev", 00:12:20.291 "config": [ 00:12:20.291 { 00:12:20.291 "params": { 00:12:20.291 "block_size": 512, 00:12:20.291 "num_blocks": 1048576, 00:12:20.291 "name": "malloc0" 00:12:20.291 }, 00:12:20.291 "method": "bdev_malloc_create" 00:12:20.291 }, 00:12:20.291 { 00:12:20.291 "params": { 00:12:20.291 "filename": "/dev/zram1", 00:12:20.291 "name": "uring0" 00:12:20.291 }, 00:12:20.291 "method": "bdev_uring_create" 00:12:20.291 }, 00:12:20.291 { 00:12:20.291 "params": { 00:12:20.291 "name": "uring0" 00:12:20.291 }, 00:12:20.291 "method": "bdev_uring_delete" 00:12:20.291 }, 00:12:20.291 { 00:12:20.291 "method": "bdev_wait_for_examine" 00:12:20.291 } 00:12:20.291 ] 00:12:20.291 } 00:12:20.291 ] 00:12:20.291 } 00:12:20.291 [2024-07-22 18:19:32.137717] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:20.291 [2024-07-22 18:19:32.137885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67387 ] 00:12:20.550 [2024-07-22 18:19:32.314274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.550 [2024-07-22 18:19:32.554475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.809 [2024-07-22 18:19:32.760431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:24.348  Copying: 0/0 [B] (average 0 Bps) 00:12:24.348 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:24.348 18:19:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:24.348 { 00:12:24.348 "subsystems": [ 00:12:24.348 { 00:12:24.348 "subsystem": "bdev", 00:12:24.348 "config": [ 00:12:24.348 { 00:12:24.348 "params": { 00:12:24.348 "block_size": 512, 00:12:24.348 "num_blocks": 1048576, 00:12:24.348 "name": "malloc0" 00:12:24.348 }, 00:12:24.348 "method": "bdev_malloc_create" 00:12:24.348 }, 00:12:24.348 { 00:12:24.348 "params": { 00:12:24.348 "filename": "/dev/zram1", 00:12:24.348 "name": "uring0" 00:12:24.348 }, 00:12:24.348 "method": "bdev_uring_create" 00:12:24.348 }, 00:12:24.348 { 00:12:24.348 "params": { 00:12:24.348 "name": "uring0" 00:12:24.348 }, 00:12:24.348 "method": "bdev_uring_delete" 00:12:24.348 }, 
00:12:24.348 { 00:12:24.348 "method": "bdev_wait_for_examine" 00:12:24.348 } 00:12:24.348 ] 00:12:24.348 } 00:12:24.348 ] 00:12:24.348 } 00:12:24.348 [2024-07-22 18:19:36.023450] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:24.348 [2024-07-22 18:19:36.023857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67451 ] 00:12:24.348 [2024-07-22 18:19:36.187027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.607 [2024-07-22 18:19:36.429980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.866 [2024-07-22 18:19:36.637958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:25.431 [2024-07-22 18:19:37.299470] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:12:25.431 [2024-07-22 18:19:37.299543] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:12:25.431 [2024-07-22 18:19:37.299599] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:12:25.431 [2024-07-22 18:19:37.299620] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:27.382 [2024-07-22 18:19:39.386134] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:12:27.949 18:19:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:28.207 00:12:28.207 real 0m35.914s 00:12:28.207 user 0m29.419s 00:12:28.207 sys 0m17.855s 00:12:28.207 18:19:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:28.207 18:19:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:28.207 ************************************ 00:12:28.207 END TEST dd_uring_copy 00:12:28.207 ************************************ 00:12:28.207 18:19:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:12:28.207 ************************************ 00:12:28.207 END TEST spdk_dd_uring 00:12:28.207 ************************************ 00:12:28.207 00:12:28.207 real 0m36.050s 00:12:28.207 user 0m29.477s 00:12:28.207 sys 0m17.934s 
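The copy pass timed above can be reproduced outside the autotest harness. The sketch below mirrors the bdev configuration that gen_conf printed (a 512-byte-block malloc bdev plus a uring bdev on /dev/zram1) but writes it to a temporary file instead of passing it through /dev/fd; it assumes an SPDK build at the repo path shown in the log and a zram device that is already sized and reset, as the harness does via remove_zram_dev.

# Minimal sketch, not part of the recorded run: redo the uring0 -> malloc0 copy.
# Assumptions: spdk_dd built under /home/vagrant/spdk_repo/spdk, /dev/zram1 prepared.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "filename": "/dev/zram1", "name": "uring0" },
          "method": "bdev_uring_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
# Same flags as dd/uring.sh@75 above: read from the uring bdev, write to the malloc bdev.
"$SPDK_DD" --ib=uring0 --ob=malloc0 --json "$conf"
rm -f "$conf"
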
00:12:28.207 18:19:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:28.207 18:19:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:12:28.207 18:19:40 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:12:28.207 18:19:40 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:28.207 18:19:40 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:28.207 18:19:40 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.207 18:19:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:28.207 ************************************ 00:12:28.207 START TEST spdk_dd_sparse 00:12:28.207 ************************************ 00:12:28.207 18:19:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:28.466 * Looking for test storage... 00:12:28.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:12:28.466 1+0 records in 00:12:28.466 1+0 records out 00:12:28.466 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00594307 s, 706 MB/s 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:12:28.466 1+0 records in 00:12:28.466 1+0 records out 00:12:28.466 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00592584 s, 708 MB/s 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:12:28.466 1+0 records in 00:12:28.466 1+0 records out 00:12:28.466 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00731188 s, 574 MB/s 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:28.466 ************************************ 00:12:28.466 START TEST dd_sparse_file_to_file 00:12:28.466 ************************************ 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' 
['lvs_name']='dd_lvstore') 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:12:28.466 18:19:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:28.466 { 00:12:28.466 "subsystems": [ 00:12:28.466 { 00:12:28.466 "subsystem": "bdev", 00:12:28.466 "config": [ 00:12:28.466 { 00:12:28.466 "params": { 00:12:28.466 "block_size": 4096, 00:12:28.466 "filename": "dd_sparse_aio_disk", 00:12:28.466 "name": "dd_aio" 00:12:28.466 }, 00:12:28.467 "method": "bdev_aio_create" 00:12:28.467 }, 00:12:28.467 { 00:12:28.467 "params": { 00:12:28.467 "lvs_name": "dd_lvstore", 00:12:28.467 "bdev_name": "dd_aio" 00:12:28.467 }, 00:12:28.467 "method": "bdev_lvol_create_lvstore" 00:12:28.467 }, 00:12:28.467 { 00:12:28.467 "method": "bdev_wait_for_examine" 00:12:28.467 } 00:12:28.467 ] 00:12:28.467 } 00:12:28.467 ] 00:12:28.467 } 00:12:28.467 [2024-07-22 18:19:40.397029] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:28.467 [2024-07-22 18:19:40.397408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67571 ] 00:12:28.725 [2024-07-22 18:19:40.570835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.983 [2024-07-22 18:19:40.813822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.241 [2024-07-22 18:19:41.015828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:30.638  Copying: 12/36 [MB] (average 857 MBps) 00:12:30.638 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:12:30.638 ************************************ 00:12:30.638 END TEST dd_sparse_file_to_file 00:12:30.638 ************************************ 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:30.638 00:12:30.638 real 0m2.179s 
00:12:30.638 user 0m1.783s 00:12:30.638 sys 0m1.103s 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.638 18:19:42 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:30.639 ************************************ 00:12:30.639 START TEST dd_sparse_file_to_bdev 00:12:30.639 ************************************ 00:12:30.639 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:12:30.639 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:30.639 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:12:30.639 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:12:30.639 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:12:30.639 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:12:30.639 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:12:30.639 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:12:30.639 18:19:42 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:30.639 { 00:12:30.639 "subsystems": [ 00:12:30.639 { 00:12:30.639 "subsystem": "bdev", 00:12:30.639 "config": [ 00:12:30.639 { 00:12:30.639 "params": { 00:12:30.639 "block_size": 4096, 00:12:30.639 "filename": "dd_sparse_aio_disk", 00:12:30.639 "name": "dd_aio" 00:12:30.639 }, 00:12:30.639 "method": "bdev_aio_create" 00:12:30.639 }, 00:12:30.639 { 00:12:30.639 "params": { 00:12:30.639 "lvs_name": "dd_lvstore", 00:12:30.639 "lvol_name": "dd_lvol", 00:12:30.639 "size_in_mib": 36, 00:12:30.639 "thin_provision": true 00:12:30.639 }, 00:12:30.639 "method": "bdev_lvol_create" 00:12:30.639 }, 00:12:30.639 { 00:12:30.639 "method": "bdev_wait_for_examine" 00:12:30.639 } 00:12:30.639 ] 00:12:30.639 } 00:12:30.639 ] 00:12:30.639 } 00:12:30.639 [2024-07-22 18:19:42.635977] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:30.639 [2024-07-22 18:19:42.636175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67636 ] 00:12:30.898 [2024-07-22 18:19:42.802721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.156 [2024-07-22 18:19:43.040468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.427 [2024-07-22 18:19:43.238509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:32.809  Copying: 12/36 [MB] (average 500 MBps) 00:12:32.809 00:12:32.809 ************************************ 00:12:32.809 END TEST dd_sparse_file_to_bdev 00:12:32.809 ************************************ 00:12:32.809 00:12:32.809 real 0m2.134s 00:12:32.809 user 0m1.763s 00:12:32.809 sys 0m1.077s 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:32.809 ************************************ 00:12:32.809 START TEST dd_sparse_bdev_to_file 00:12:32.809 ************************************ 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:12:32.809 18:19:44 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:32.809 { 00:12:32.809 "subsystems": [ 00:12:32.809 { 00:12:32.809 "subsystem": "bdev", 00:12:32.809 "config": [ 00:12:32.809 { 00:12:32.809 "params": { 00:12:32.809 "block_size": 4096, 00:12:32.809 "filename": "dd_sparse_aio_disk", 00:12:32.809 "name": "dd_aio" 00:12:32.809 }, 00:12:32.809 "method": "bdev_aio_create" 00:12:32.809 }, 00:12:32.809 { 00:12:32.809 "method": "bdev_wait_for_examine" 00:12:32.809 } 00:12:32.809 ] 00:12:32.809 } 00:12:32.809 ] 00:12:32.809 } 00:12:33.067 [2024-07-22 
18:19:44.846096] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:33.068 [2024-07-22 18:19:44.846296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67688 ] 00:12:33.068 [2024-07-22 18:19:45.023017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.326 [2024-07-22 18:19:45.254247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.584 [2024-07-22 18:19:45.450772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:35.221  Copying: 12/36 [MB] (average 1000 MBps) 00:12:35.221 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:12:35.221 ************************************ 00:12:35.221 END TEST dd_sparse_bdev_to_file 00:12:35.221 ************************************ 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:35.221 00:12:35.221 real 0m2.136s 00:12:35.221 user 0m1.740s 00:12:35.221 sys 0m1.091s 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:12:35.221 00:12:35.221 real 0m6.751s 00:12:35.221 user 0m5.381s 00:12:35.221 sys 0m3.458s 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.221 18:19:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:35.221 ************************************ 00:12:35.221 END TEST spdk_dd_sparse 00:12:35.221 ************************************ 00:12:35.221 18:19:46 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:12:35.221 18:19:46 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative 
/home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:35.221 18:19:46 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:35.221 18:19:46 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.221 18:19:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:35.221 ************************************ 00:12:35.221 START TEST spdk_dd_negative 00:12:35.221 ************************************ 00:12:35.221 18:19:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:35.221 * Looking for test storage... 00:12:35.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:35.221 18:19:47 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.221 18:19:47 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.221 18:19:47 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.221 18:19:47 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.221 18:19:47 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.221 18:19:47 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.221 18:19:47 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.221 18:19:47 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:12:35.221 18:19:47 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.221 18:19:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:35.221 18:19:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:35.222 ************************************ 00:12:35.222 START TEST dd_invalid_arguments 00:12:35.222 ************************************ 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:35.222 18:19:47 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:35.222 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:12:35.222 00:12:35.222 CPU options: 00:12:35.222 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:12:35.222 (like [0,1,10]) 00:12:35.222 --lcores lcore to CPU mapping list. The list is in the format: 00:12:35.222 [<,lcores[@CPUs]>...] 00:12:35.222 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:12:35.222 Within the group, '-' is used for range separator, 00:12:35.222 ',' is used for single number separator. 00:12:35.222 '( )' can be omitted for single element group, 00:12:35.222 '@' can be omitted if cpus and lcores have the same value 00:12:35.222 --disable-cpumask-locks Disable CPU core lock files. 00:12:35.222 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:12:35.222 pollers in the app support interrupt mode) 00:12:35.222 -p, --main-core main (primary) core for DPDK 00:12:35.222 00:12:35.222 Configuration options: 00:12:35.222 -c, --config, --json JSON config file 00:12:35.222 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:12:35.222 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:12:35.222 --wait-for-rpc wait for RPCs to initialize subsystems 00:12:35.222 --rpcs-allowed comma-separated list of permitted RPCS 00:12:35.222 --json-ignore-init-errors don't exit on invalid config entry 00:12:35.222 00:12:35.222 Memory options: 00:12:35.222 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:12:35.222 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:12:35.222 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:12:35.222 -R, --huge-unlink unlink huge files after initialization 00:12:35.222 -n, --mem-channels number of memory channels used for DPDK 00:12:35.222 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:12:35.222 --msg-mempool-size global message memory pool size in count (default: 262143) 00:12:35.222 --no-huge run without using hugepages 00:12:35.222 -i, --shm-id shared memory ID (optional) 00:12:35.222 -g, --single-file-segments force creating just one hugetlbfs file 00:12:35.222 00:12:35.222 PCI options: 00:12:35.222 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:12:35.222 -B, --pci-blocked pci addr to block (can be used more than once) 00:12:35.222 -u, --no-pci disable PCI access 00:12:35.222 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:12:35.222 00:12:35.222 Log options: 00:12:35.222 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:12:35.222 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:12:35.222 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:12:35.222 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:12:35.222 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:12:35.222 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:12:35.222 nvme_auth, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, 00:12:35.222 sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, 00:12:35.222 vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, 00:12:35.222 vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, 00:12:35.222 vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 
virtio_blk, virtio_dev, 00:12:35.222 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:12:35.222 --silence-noticelog disable notice level logging to stderr 00:12:35.222 00:12:35.222 Trace options: 00:12:35.222 --num-trace-entries number of trace entries for each core, must be power of 2, 00:12:35.222 setting 0 to disable trace (default 32768) 00:12:35.222 Tracepoints vary in size and can use more than one trace entry. 00:12:35.222 -e, --tpoint-group [: 128 )) 00:12:35.482 18:19:47 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.482 18:19:47 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.482 00:12:35.482 real 0m0.168s 00:12:35.482 user 0m0.094s 00:12:35.482 sys 0m0.073s 00:12:35.482 18:19:47 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.482 ************************************ 00:12:35.482 END TEST dd_double_input 00:12:35.482 ************************************ 00:12:35.482 18:19:47 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:35.742 ************************************ 00:12:35.742 START TEST dd_double_output 00:12:35.742 ************************************ 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:35.742 [2024-07-22 18:19:47.599869] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.742 00:12:35.742 real 0m0.146s 00:12:35.742 user 0m0.079s 00:12:35.742 sys 0m0.066s 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.742 ************************************ 00:12:35.742 END TEST dd_double_output 00:12:35.742 ************************************ 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:35.742 ************************************ 00:12:35.742 START TEST dd_no_input 00:12:35.742 ************************************ 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.742 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.743 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:35.743 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:35.743 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:36.001 [2024-07-22 18:19:47.798091] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:36.001 00:12:36.001 real 0m0.146s 00:12:36.001 user 0m0.072s 00:12:36.001 sys 0m0.073s 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.001 ************************************ 00:12:36.001 END TEST dd_no_input 00:12:36.001 ************************************ 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:36.001 ************************************ 00:12:36.001 START TEST dd_no_output 00:12:36.001 ************************************ 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:12:36.001 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:36.002 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.002 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.002 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.002 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.002 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.002 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.002 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.002 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:36.002 18:19:47 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:36.002 [2024-07-22 18:19:48.004089] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:36.261 00:12:36.261 real 0m0.166s 00:12:36.261 user 0m0.087s 00:12:36.261 sys 0m0.077s 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.261 ************************************ 00:12:36.261 END TEST dd_no_output 00:12:36.261 ************************************ 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:36.261 ************************************ 00:12:36.261 START TEST dd_wrong_blocksize 00:12:36.261 ************************************ 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:36.261 [2024-07-22 18:19:48.199778] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:36.261 00:12:36.261 real 0m0.144s 00:12:36.261 user 0m0.080s 00:12:36.261 sys 0m0.063s 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.261 18:19:48 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:36.261 ************************************ 00:12:36.261 END TEST dd_wrong_blocksize 00:12:36.261 ************************************ 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:36.520 ************************************ 00:12:36.520 START TEST dd_smaller_blocksize 00:12:36.520 ************************************ 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.520 18:19:48 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:36.520 18:19:48 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:36.520 [2024-07-22 18:19:48.412530] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:36.520 [2024-07-22 18:19:48.412732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67932 ] 00:12:36.778 [2024-07-22 18:19:48.587304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.036 [2024-07-22 18:19:48.840882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.036 [2024-07-22 18:19:49.045899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:37.601 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:12:37.601 [2024-07-22 18:19:49.522088] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:12:37.601 [2024-07-22 18:19:49.522224] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:38.534 [2024-07-22 18:19:50.286354] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:38.792 18:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:12:38.792 18:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:38.792 18:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:12:38.792 18:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:12:38.792 18:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:12:38.792 18:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:38.792 00:12:38.792 real 0m2.434s 00:12:38.792 user 0m1.779s 00:12:38.792 sys 0m0.538s 00:12:38.792 18:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.792 ************************************ 00:12:38.792 END TEST dd_smaller_blocksize 00:12:38.792 ************************************ 00:12:38.792 18:19:50 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:38.792 18:19:50 
spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:38.793 ************************************ 00:12:38.793 START TEST dd_invalid_count 00:12:38.793 ************************************ 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:38.793 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:39.053 [2024-07-22 18:19:50.898832] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:12:39.053 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:12:39.053 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.053 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.053 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.053 00:12:39.053 real 0m0.174s 00:12:39.053 user 0m0.094s 00:12:39.053 sys 0m0.078s 00:12:39.053 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.053 ************************************ 00:12:39.053 END TEST dd_invalid_count 00:12:39.053 ************************************ 00:12:39.053 18:19:50 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:39.053 ************************************ 00:12:39.053 START TEST dd_invalid_oflag 00:12:39.053 ************************************ 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:39.053 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:39.318 [2024-07-22 18:19:51.123528] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.318 00:12:39.318 real 0m0.168s 00:12:39.318 user 0m0.099s 00:12:39.318 sys 0m0.068s 00:12:39.318 
************************************ 00:12:39.318 END TEST dd_invalid_oflag 00:12:39.318 ************************************ 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:39.318 ************************************ 00:12:39.318 START TEST dd_invalid_iflag 00:12:39.318 ************************************ 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:39.318 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:39.583 [2024-07-22 18:19:51.337705] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.583 00:12:39.583 real 0m0.164s 00:12:39.583 user 
0m0.084s 00:12:39.583 sys 0m0.077s 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.583 ************************************ 00:12:39.583 END TEST dd_invalid_iflag 00:12:39.583 ************************************ 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:39.583 ************************************ 00:12:39.583 START TEST dd_unknown_flag 00:12:39.583 ************************************ 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:39.583 18:19:51 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:39.583 [2024-07-22 18:19:51.557632] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
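The negative_dd.sh cases traced above all follow one pattern: invoke spdk_dd with a deliberately bad flag combination through the NOT/valid_exec_arg wrapper, then require a non-zero exit status and the matching error line. A minimal stand-alone sketch of that check, outside the autotest harness, could look like the following; the spdk_dd path is copied from the trace and is an assumption for any other build tree, and none of these lines are captured output.

# Sketch only, not captured output: reproduces the --iflag without --if case.
SPDK_DD=${SPDK_DD:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}   # assumed location
if "$SPDK_DD" --ib= --ob= --iflag=0 2>err.log; then
    echo "unexpected success" >&2
    exit 1
fi
grep -q -- '--iflags may be used only with --if' err.log && echo 'negative check passed'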
00:12:39.583 [2024-07-22 18:19:51.557813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68050 ] 00:12:39.842 [2024-07-22 18:19:51.737061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.101 [2024-07-22 18:19:52.020316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.359 [2024-07-22 18:19:52.231746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:40.359 [2024-07-22 18:19:52.343581] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:40.359 [2024-07-22 18:19:52.343658] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:40.359 [2024-07-22 18:19:52.343740] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:12:40.359 [2024-07-22 18:19:52.343761] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:40.359 [2024-07-22 18:19:52.344027] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:12:40.359 [2024-07-22 18:19:52.344051] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:40.359 [2024-07-22 18:19:52.344128] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:40.359 [2024-07-22 18:19:52.344145] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:41.296 [2024-07-22 18:19:53.081409] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:41.553 00:12:41.553 real 0m2.076s 00:12:41.553 user 0m1.680s 00:12:41.553 sys 0m0.286s 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:41.553 ************************************ 00:12:41.553 END TEST dd_unknown_flag 00:12:41.553 ************************************ 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.553 18:19:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:41.811 ************************************ 00:12:41.811 START TEST dd_invalid_json 00:12:41.811 ************************************ 00:12:41.811 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:12:41.811 18:19:53 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:41.811 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:12:41.811 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:12:41.811 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:41.811 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:41.811 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.811 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:41.811 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.811 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:41.811 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.811 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:41.812 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:41.812 18:19:53 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:41.812 [2024-07-22 18:19:53.691241] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
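dd_invalid_json, whose trace begins here, hands spdk_dd a --json config over /dev/fd/62 and expects the parser to reject it before any copy starts; the run just below fails with 'JSON data cannot be empty'. A rough, hedged approximation using process substitution in place of the harness's fd redirection, with placeholder dump-file paths, might be:

# Sketch only; the dump-file paths and the empty payload are assumptions.
SPDK_DD=${SPDK_DD:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}
if "$SPDK_DD" --if=/tmp/dd.dump0 --of=/tmp/dd.dump1 --json <(printf '') 2>err.log; then
    echo "unexpected success" >&2
    exit 1
fi
grep -q 'JSON data cannot be empty' err.log && echo 'negative check passed'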
00:12:41.812 [2024-07-22 18:19:53.691426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68095 ] 00:12:42.070 [2024-07-22 18:19:53.865090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.330 [2024-07-22 18:19:54.107108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.330 [2024-07-22 18:19:54.107261] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:12:42.330 [2024-07-22 18:19:54.107290] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:42.330 [2024-07-22 18:19:54.107306] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:42.330 [2024-07-22 18:19:54.107395] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:12:42.588 18:19:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:12:42.588 18:19:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:42.588 18:19:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:12:42.588 18:19:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:12:42.588 18:19:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:12:42.588 18:19:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:42.588 00:12:42.588 real 0m0.968s 00:12:42.588 user 0m0.699s 00:12:42.588 sys 0m0.163s 00:12:42.588 18:19:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:42.588 ************************************ 00:12:42.588 END TEST dd_invalid_json 00:12:42.588 ************************************ 00:12:42.588 18:19:54 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:12:42.588 18:19:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:12:42.588 00:12:42.588 real 0m7.610s 00:12:42.588 user 0m5.173s 00:12:42.588 sys 0m2.043s 00:12:42.588 18:19:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:42.588 ************************************ 00:12:42.588 END TEST spdk_dd_negative 00:12:42.588 ************************************ 00:12:42.588 18:19:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:42.847 18:19:54 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:12:42.847 00:12:42.847 real 3m26.119s 00:12:42.847 user 2m47.821s 00:12:42.847 sys 1m11.918s 00:12:42.847 18:19:54 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:42.847 18:19:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:42.847 ************************************ 00:12:42.847 END TEST spdk_dd 00:12:42.847 ************************************ 00:12:42.847 18:19:54 -- common/autotest_common.sh@1142 -- # return 0 00:12:42.847 18:19:54 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:12:42.847 18:19:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:12:42.847 18:19:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:12:42.847 18:19:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:42.847 18:19:54 -- common/autotest_common.sh@10 -- # set +x 00:12:42.847 18:19:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:12:42.847 18:19:54 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:12:42.847 18:19:54 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:12:42.847 18:19:54 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:12:42.847 18:19:54 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:12:42.847 18:19:54 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:12:42.847 18:19:54 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:42.847 18:19:54 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:42.847 18:19:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.847 18:19:54 -- common/autotest_common.sh@10 -- # set +x 00:12:42.847 ************************************ 00:12:42.847 START TEST nvmf_tcp 00:12:42.847 ************************************ 00:12:42.847 18:19:54 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:42.847 * Looking for test storage... 00:12:42.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:42.847 18:19:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:12:42.847 18:19:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:12:42.847 18:19:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:42.847 18:19:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:42.847 18:19:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.847 18:19:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:42.847 ************************************ 00:12:42.847 START TEST nvmf_target_core 00:12:42.847 ************************************ 00:12:42.847 18:19:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:43.106 * Looking for test storage... 00:12:43.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:43.106 ************************************ 00:12:43.106 START TEST nvmf_host_management 00:12:43.106 ************************************ 00:12:43.106 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:43.106 * Looking for test storage... 
00:12:43.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:43.106 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:43.106 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:43.106 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.106 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.106 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.106 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.106 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.106 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.106 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.106 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:43.107 Cannot find device "nvmf_init_br" 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:43.107 Cannot find device "nvmf_tgt_br" 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:43.107 Cannot find device "nvmf_tgt_br2" 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:43.107 Cannot find device "nvmf_init_br" 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:43.107 Cannot find device "nvmf_tgt_br" 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:43.107 Cannot find device "nvmf_tgt_br2" 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:43.107 Cannot find device "nvmf_br" 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:12:43.107 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:43.366 Cannot find device "nvmf_init_if" 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:43.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:43.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:43.366 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:43.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:12:43.366 00:12:43.366 --- 10.0.0.2 ping statistics --- 00:12:43.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.366 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:43.625 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:43.625 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:12:43.625 00:12:43.625 --- 10.0.0.3 ping statistics --- 00:12:43.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.625 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:43.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:12:43.625 00:12:43.625 --- 10.0.0.1 ping statistics --- 00:12:43.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.625 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=68379 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 68379 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 68379 ']' 00:12:43.625 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.626 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
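Before host_management starts its target, nvmf_veth_init (traced above) wires up an isolated topology: a network namespace for the target, veth pairs enslaved to the nvmf_br bridge, the 10.0.0.1/2/3 addresses, an iptables accept rule for port 4420, and the ping checks whose output appears just above. Condensed into a stand-alone sketch with the second target interface and the FORWARD rule left out (commands lifted from the trace; requires root):

# Sketch only; a trimmed version of the wiring shown in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # same reachability check the harness runs

Once the namespace answers, the harness launches nvmf_tgt inside it (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E) and waits on /var/tmp/spdk.sock, which is the waitforlisten step visible around this point in the log.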
00:12:43.626 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.626 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.626 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.626 [2024-07-22 18:19:55.546572] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:43.626 [2024-07-22 18:19:55.546768] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.885 [2024-07-22 18:19:55.726628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.144 [2024-07-22 18:19:56.002324] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.144 [2024-07-22 18:19:56.002387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.144 [2024-07-22 18:19:56.002403] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.144 [2024-07-22 18:19:56.002420] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.144 [2024-07-22 18:19:56.002435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.144 [2024-07-22 18:19:56.002684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.144 [2024-07-22 18:19:56.003337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.144 [2024-07-22 18:19:56.003474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:44.144 [2024-07-22 18:19:56.003533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.403 [2024-07-22 18:19:56.207958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:44.662 [2024-07-22 18:19:56.552570] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.662 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:44.662 Malloc0 00:12:44.662 [2024-07-22 18:19:56.669429] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=68439 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 68439 /var/tmp/bdevperf.sock 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 68439 ']' 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:44.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
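The rpcs.txt batch piped into rpc_cmd above is not echoed in the trace; judging from what the log does show (a Malloc0 bdev, a TCP listener on 10.0.0.2:4420, and later host management against nqn.2016-06.io.spdk:cnode0 / host0), a representative sequence of RPC method lines would look roughly like the sketch below. The exact sizes, serial number and flags are assumptions, not copied from the script.

# Hypothetical reconstruction of the batched target-side RPCs (sizes/flags assumed):
bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# The subsystem does not allow arbitrary hosts, so the initiator's NQN is whitelisted explicitly;
# this is the entry that nvmf_subsystem_remove_host/add_host toggle later in the run.
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0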
00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:44.920 { 00:12:44.920 "params": { 00:12:44.920 "name": "Nvme$subsystem", 00:12:44.920 "trtype": "$TEST_TRANSPORT", 00:12:44.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:44.920 "adrfam": "ipv4", 00:12:44.920 "trsvcid": "$NVMF_PORT", 00:12:44.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:44.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:44.920 "hdgst": ${hdgst:-false}, 00:12:44.920 "ddgst": ${ddgst:-false} 00:12:44.920 }, 00:12:44.920 "method": "bdev_nvme_attach_controller" 00:12:44.920 } 00:12:44.920 EOF 00:12:44.920 )") 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:44.920 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:44.920 "params": { 00:12:44.920 "name": "Nvme0", 00:12:44.920 "trtype": "tcp", 00:12:44.920 "traddr": "10.0.0.2", 00:12:44.920 "adrfam": "ipv4", 00:12:44.920 "trsvcid": "4420", 00:12:44.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:44.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:44.920 "hdgst": false, 00:12:44.920 "ddgst": false 00:12:44.920 }, 00:12:44.920 "method": "bdev_nvme_attach_controller" 00:12:44.920 }' 00:12:44.920 [2024-07-22 18:19:56.838492] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:44.920 [2024-07-22 18:19:56.838682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68439 ] 00:12:45.179 [2024-07-22 18:19:57.021431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.437 [2024-07-22 18:19:57.260611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.695 [2024-07-22 18:19:57.471432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:45.695 Running I/O for 10 seconds... 
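The fragment printed by gen_nvmf_target_json above is only the per-controller entry; what bdevperf receives on --json /dev/fd/63 is that entry wrapped in a full SPDK JSON config. The sketch below illustrates that shape using SPDK's usual subsystems/bdev/config layout, with the params copied from the trace; the wrapper structure and the temporary file path are illustrative assumptions, not a verbatim dump of the generated document (the test uses a process substitution instead of a file).

cat <<'JSON' > /tmp/bdevperf_nvme.json   # hypothetical path; the test pipes this via /dev/fd/63
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same workload parameters as the traced invocation (the test additionally passes -r /var/tmp/bdevperf.sock for RPC control).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10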
00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']' 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:45.955 18:19:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.955 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:45.955 [2024-07-22 18:19:57.892648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.955 [2024-07-22 18:19:57.892727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.955 [2024-07-22 18:19:57.892771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.955 [2024-07-22 18:19:57.892789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.955 [2024-07-22 18:19:57.892808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.955 [2024-07-22 18:19:57.892822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.955 [2024-07-22 18:19:57.892840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.955 [2024-07-22 18:19:57.892855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.955 [2024-07-22 18:19:57.892872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.955 [2024-07-22 18:19:57.892887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.955 [2024-07-22 18:19:57.892904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.955 [2024-07-22 18:19:57.892919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.955 [2024-07-22 18:19:57.892936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.955 [2024-07-22 18:19:57.892951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.892969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.892983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:12:45.956 [2024-07-22 18:19:57.893686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.893972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.893987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 
[2024-07-22 18:19:57.894004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.894018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.894035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.894049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.894066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.894080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.894116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.894131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.894148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.894162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.894180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.894194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.894223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.894240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.894257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.956 [2024-07-22 18:19:57.894272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.956 [2024-07-22 18:19:57.894289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 
18:19:57.894357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 
18:19:57.894669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:45.957 [2024-07-22 18:19:57.894855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.894879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:12:45.957 [2024-07-22 18:19:57.895161] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 
00:12:45.957 [2024-07-22 18:19:57.895342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.957 [2024-07-22 18:19:57.895378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.895398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.957 [2024-07-22 18:19:57.895412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.895427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.957 [2024-07-22 18:19:57.895441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.895456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.957 [2024-07-22 18:19:57.895470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.957 [2024-07-22 18:19:57.895485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:12:45.957 [2024-07-22 18:19:57.896691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:45.957 task offset: 32768 on job bdev=Nvme0n1 fails 00:12:45.957 00:12:45.957 Latency(us) 00:12:45.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.957 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:45.957 Job: Nvme0n1 ended in about 0.22 seconds with error 00:12:45.957 Verification LBA range: start 0x0 length 0x400 00:12:45.957 Nvme0n1 : 0.22 1138.01 71.13 284.50 0.00 42520.06 3425.75 45279.42 00:12:45.957 =================================================================================================================== 00:12:45.957 Total : 1138.01 71.13 284.50 0.00 42520.06 3425.75 45279.42 00:12:45.957 [2024-07-22 18:19:57.901850] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:45.957 [2024-07-22 18:19:57.901903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:12:45.957 [2024-07-22 18:19:57.913274] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
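The round trip above — remove the host NQN from the subsystem while bdevperf has 64 WRITEs in flight, watch every outstanding command complete as ABORTED - SQ DELETION and the qpair get freed, then re-add the host so the controller reset can reconnect — can be reproduced by hand against the same sockets. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; the sketch below assumes the default /var/tmp/spdk.sock for the target and uses -s for the bdevperf instance, as in the trace.

# Confirm bdevperf has issued some reads before revoking access (waitforio uses a threshold of 100).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'
# Revoke the host's access: in-flight I/O is aborted (SQ DELETION) and the initiator resets the controller.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admit the host so the reset completes ("Resetting controller successful." above).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0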
00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 68439 00:12:46.891 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (68439) - No such process 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:46.891 { 00:12:46.891 "params": { 00:12:46.891 "name": "Nvme$subsystem", 00:12:46.891 "trtype": "$TEST_TRANSPORT", 00:12:46.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:46.891 "adrfam": "ipv4", 00:12:46.891 "trsvcid": "$NVMF_PORT", 00:12:46.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:46.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:46.891 "hdgst": ${hdgst:-false}, 00:12:46.891 "ddgst": ${ddgst:-false} 00:12:46.891 }, 00:12:46.891 "method": "bdev_nvme_attach_controller" 00:12:46.891 } 00:12:46.891 EOF 00:12:46.891 )") 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:46.891 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:46.891 "params": { 00:12:46.891 "name": "Nvme0", 00:12:46.891 "trtype": "tcp", 00:12:46.891 "traddr": "10.0.0.2", 00:12:46.891 "adrfam": "ipv4", 00:12:46.891 "trsvcid": "4420", 00:12:46.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:46.891 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:46.891 "hdgst": false, 00:12:46.891 "ddgst": false 00:12:46.891 }, 00:12:46.891 "method": "bdev_nvme_attach_controller" 00:12:46.891 }' 00:12:47.150 [2024-07-22 18:19:59.001407] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:47.150 [2024-07-22 18:19:59.001964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68483 ] 00:12:47.408 [2024-07-22 18:19:59.169134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.408 [2024-07-22 18:19:59.406671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.666 [2024-07-22 18:19:59.617987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:47.924 Running I/O for 1 seconds... 00:12:48.858 00:12:48.858 Latency(us) 00:12:48.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.858 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:48.858 Verification LBA range: start 0x0 length 0x400 00:12:48.858 Nvme0n1 : 1.00 1340.21 83.76 0.00 0.00 46842.76 6136.55 43611.23 00:12:48.858 =================================================================================================================== 00:12:48.858 Total : 1340.21 83.76 0.00 0.00 46842.76 6136.55 43611.23 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.234 rmmod nvme_tcp 00:12:50.234 rmmod nvme_fabrics 00:12:50.234 rmmod nvme_keyring 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 68379 ']' 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 68379 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 68379 ']' 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 68379 00:12:50.234 18:20:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68379 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:50.234 killing process with pid 68379 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68379' 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 68379 00:12:50.234 18:20:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 68379 00:12:51.611 [2024-07-22 18:20:03.470633] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:51.611 00:12:51.611 real 0m8.661s 00:12:51.611 user 0m33.795s 00:12:51.611 sys 0m1.737s 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:51.611 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:51.611 ************************************ 00:12:51.611 END TEST nvmf_host_management 00:12:51.611 ************************************ 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:51.871 ************************************ 00:12:51.871 START TEST 
nvmf_lvol 00:12:51.871 ************************************ 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:51.871 * Looking for test storage... 00:12:51.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
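nvmftestinit for the lvol test rebuilds the same veth topology from scratch; as the trace below shows, nvmf_veth_init first tears down whatever a previous run left behind, so the "Cannot find device" and "Cannot open network namespace" messages are the harmless result of deleting interfaces that do not exist yet. The pattern amounts to the sketch below; the error tolerance (written here as || true) is inferred from the "# true" entries in the trace rather than copied from the script.

# Best-effort teardown before re-creating the topology (failures are expected on a clean host).
ip link set nvmf_init_br nomaster   || true
ip link set nvmf_tgt_br  nomaster   || true
ip link set nvmf_tgt_br2 nomaster   || true
ip link set nvmf_init_br down       || true
ip link set nvmf_tgt_br  down       || true
ip link set nvmf_tgt_br2 down       || true
ip link delete nvmf_br type bridge  || true
ip link delete nvmf_init_if         || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
# ...then the namespace, veth pairs, bridge and addresses are re-created exactly as in the earlier sketch.
ip netns add nvmf_tgt_ns_spdk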
00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.871 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:51.872 18:20:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:51.872 Cannot find device "nvmf_tgt_br" 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:51.872 Cannot find device "nvmf_tgt_br2" 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:51.872 Cannot find device "nvmf_tgt_br" 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:51.872 Cannot find device "nvmf_tgt_br2" 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:51.872 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:12:51.872 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:52.131 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:52.131 18:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:52.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:12:52.131 00:12:52.131 --- 10.0.0.2 ping statistics --- 00:12:52.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.131 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:52.131 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:52.131 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:12:52.131 00:12:52.131 --- 10.0.0.3 ping statistics --- 00:12:52.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.131 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:52.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:52.131 00:12:52.131 --- 10.0.0.1 ping statistics --- 00:12:52.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.131 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=68729 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 68729 00:12:52.131 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 68729 ']' 00:12:52.132 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.132 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.132 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.132 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.132 18:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:52.391 [2024-07-22 18:20:04.218882] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:12:52.391 [2024-07-22 18:20:04.219031] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.391 [2024-07-22 18:20:04.387382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.649 [2024-07-22 18:20:04.647676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.649 [2024-07-22 18:20:04.647744] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.649 [2024-07-22 18:20:04.647762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.650 [2024-07-22 18:20:04.647778] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.650 [2024-07-22 18:20:04.647790] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.650 [2024-07-22 18:20:04.647988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.650 [2024-07-22 18:20:04.648123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.650 [2024-07-22 18:20:04.648139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.908 [2024-07-22 18:20:04.853013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:53.167 18:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.167 18:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:53.167 18:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:53.167 18:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:53.167 18:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:53.167 18:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.167 18:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:53.426 [2024-07-22 18:20:05.435097] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.684 18:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:53.943 18:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:53.943 18:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:54.201 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:54.201 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:54.460 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:54.718 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e38dde19-80ae-473b-bd14-8d5701281358 00:12:54.718 18:20:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e38dde19-80ae-473b-bd14-8d5701281358 lvol 20 00:12:54.976 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=41c16965-bffb-4d9d-92d1-10aab7c50e91 00:12:54.976 18:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:55.267 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 41c16965-bffb-4d9d-92d1-10aab7c50e91 00:12:55.533 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:55.791 [2024-07-22 18:20:07.603504] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.791 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:56.049 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:56.049 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=68799 00:12:56.049 18:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:56.983 18:20:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 41c16965-bffb-4d9d-92d1-10aab7c50e91 MY_SNAPSHOT 00:12:57.242 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=33488f9e-c66e-456d-a6fc-1f76aca646a4 00:12:57.242 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 41c16965-bffb-4d9d-92d1-10aab7c50e91 30 00:12:57.500 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 33488f9e-c66e-456d-a6fc-1f76aca646a4 MY_CLONE 00:12:57.759 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=485a3a24-1981-4b90-afa2-d13867209f20 00:12:57.759 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 485a3a24-1981-4b90-afa2-d13867209f20 00:12:58.327 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 68799 00:13:06.500 Initializing NVMe Controllers 00:13:06.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:06.500 Controller IO queue size 128, less than required. 00:13:06.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:06.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:06.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:06.500 Initialization complete. Launching workers. 
00:13:06.500 ======================================================== 00:13:06.500 Latency(us) 00:13:06.500 Device Information : IOPS MiB/s Average min max 00:13:06.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8553.30 33.41 14968.16 313.11 175863.06 00:13:06.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8407.70 32.84 15222.75 6171.47 136815.87 00:13:06.500 ======================================================== 00:13:06.500 Total : 16961.00 66.25 15094.36 313.11 175863.06 00:13:06.500 00:13:06.500 18:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:06.758 18:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 41c16965-bffb-4d9d-92d1-10aab7c50e91 00:13:07.016 18:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e38dde19-80ae-473b-bd14-8d5701281358 00:13:07.016 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:07.016 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:07.016 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:07.016 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.016 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.275 rmmod nvme_tcp 00:13:07.275 rmmod nvme_fabrics 00:13:07.275 rmmod nvme_keyring 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 68729 ']' 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 68729 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 68729 ']' 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 68729 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68729 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:07.275 killing process with pid 68729 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 68729' 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 68729 00:13:07.275 18:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 68729 00:13:08.653 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.653 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:08.653 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:08.653 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:08.653 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:08.653 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.653 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.653 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:08.913 ************************************ 00:13:08.913 END TEST nvmf_lvol 00:13:08.913 ************************************ 00:13:08.913 00:13:08.913 real 0m17.058s 00:13:08.913 user 1m7.986s 00:13:08.913 sys 0m4.147s 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:08.913 ************************************ 00:13:08.913 START TEST nvmf_lvs_grow 00:13:08.913 ************************************ 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:08.913 * Looking for test storage... 
00:13:08.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:08.913 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:08.914 Cannot find device "nvmf_tgt_br" 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:08.914 Cannot find device "nvmf_tgt_br2" 00:13:08.914 18:20:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:08.914 Cannot find device "nvmf_tgt_br" 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:13:08.914 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:09.174 Cannot find device "nvmf_tgt_br2" 00:13:09.174 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:13:09.174 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:09.174 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:09.174 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:09.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:09.174 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:13:09.174 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:09.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:09.174 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:13:09.174 18:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:09.174 18:20:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:09.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:13:09.174 00:13:09.174 --- 10.0.0.2 ping statistics --- 00:13:09.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.174 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:09.174 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:09.434 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:09.434 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:13:09.434 00:13:09.434 --- 10.0.0.3 ping statistics --- 00:13:09.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.434 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:09.434 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:09.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:09.434 00:13:09.434 --- 10.0.0.1 ping statistics --- 00:13:09.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.434 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:09.434 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.434 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:13:09.434 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.434 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.434 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.434 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.434 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=69135 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 69135 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 69135 ']' 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.435 18:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:09.435 [2024-07-22 18:20:21.349932] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:09.435 [2024-07-22 18:20:21.350102] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.694 [2024-07-22 18:20:21.533126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.954 [2024-07-22 18:20:21.829580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.954 [2024-07-22 18:20:21.829652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.954 [2024-07-22 18:20:21.829673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.954 [2024-07-22 18:20:21.829692] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.954 [2024-07-22 18:20:21.829708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.954 [2024-07-22 18:20:21.829769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.212 [2024-07-22 18:20:22.045744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:10.470 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.470 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:10.470 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.470 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:10.470 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:10.470 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.471 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:10.729 [2024-07-22 18:20:22.530283] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:10.729 ************************************ 00:13:10.729 START TEST lvs_grow_clean 00:13:10.729 ************************************ 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:10.729 18:20:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:10.729 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:10.987 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:10.987 18:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:11.248 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:11.248 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:11.248 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:11.516 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:11.516 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:11.516 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a lvol 150 00:13:11.784 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=42647463-a374-4082-bd30-34b0fc77c457 00:13:11.784 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:11.784 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:12.042 [2024-07-22 18:20:23.817604] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:12.042 [2024-07-22 18:20:23.817718] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:12.042 true 00:13:12.042 18:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:12.042 18:20:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:12.301 18:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:12.301 18:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:12.559 18:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 42647463-a374-4082-bd30-34b0fc77c457 00:13:12.817 18:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:13.076 [2024-07-22 18:20:24.874526] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.076 18:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:13.335 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=69223 00:13:13.335 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:13.335 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:13.335 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 69223 /var/tmp/bdevperf.sock 00:13:13.335 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 69223 ']' 00:13:13.335 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:13.335 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.335 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:13.335 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.335 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:13.335 [2024-07-22 18:20:25.257167] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:13.335 [2024-07-22 18:20:25.257331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69223 ] 00:13:13.594 [2024-07-22 18:20:25.415328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.907 [2024-07-22 18:20:25.656011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.907 [2024-07-22 18:20:25.857023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:14.166 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.166 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:14.166 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:14.424 Nvme0n1 00:13:14.683 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:14.942 [ 00:13:14.942 { 00:13:14.942 "name": "Nvme0n1", 00:13:14.942 "aliases": [ 00:13:14.942 "42647463-a374-4082-bd30-34b0fc77c457" 00:13:14.942 ], 00:13:14.942 "product_name": "NVMe disk", 00:13:14.942 "block_size": 4096, 00:13:14.942 "num_blocks": 38912, 00:13:14.942 "uuid": "42647463-a374-4082-bd30-34b0fc77c457", 00:13:14.942 "assigned_rate_limits": { 00:13:14.942 "rw_ios_per_sec": 0, 00:13:14.942 "rw_mbytes_per_sec": 0, 00:13:14.942 "r_mbytes_per_sec": 0, 00:13:14.942 "w_mbytes_per_sec": 0 00:13:14.942 }, 00:13:14.942 "claimed": false, 00:13:14.942 "zoned": false, 00:13:14.942 "supported_io_types": { 00:13:14.942 "read": true, 00:13:14.942 "write": true, 00:13:14.942 "unmap": true, 00:13:14.942 "flush": true, 00:13:14.942 "reset": true, 00:13:14.942 "nvme_admin": true, 00:13:14.942 "nvme_io": true, 00:13:14.942 "nvme_io_md": false, 00:13:14.942 "write_zeroes": true, 00:13:14.942 "zcopy": false, 00:13:14.942 "get_zone_info": false, 00:13:14.942 "zone_management": false, 00:13:14.942 "zone_append": false, 00:13:14.942 "compare": true, 00:13:14.942 "compare_and_write": true, 00:13:14.942 "abort": true, 00:13:14.942 "seek_hole": false, 00:13:14.942 "seek_data": false, 00:13:14.942 "copy": true, 00:13:14.942 "nvme_iov_md": false 00:13:14.942 }, 00:13:14.942 "memory_domains": [ 00:13:14.942 { 00:13:14.942 "dma_device_id": "system", 00:13:14.942 "dma_device_type": 1 00:13:14.942 } 00:13:14.942 ], 00:13:14.942 "driver_specific": { 00:13:14.942 "nvme": [ 00:13:14.942 { 00:13:14.942 "trid": { 00:13:14.942 "trtype": "TCP", 00:13:14.942 "adrfam": "IPv4", 00:13:14.942 "traddr": "10.0.0.2", 00:13:14.942 "trsvcid": "4420", 00:13:14.942 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:14.942 }, 00:13:14.942 "ctrlr_data": { 00:13:14.942 "cntlid": 1, 00:13:14.942 "vendor_id": "0x8086", 00:13:14.942 "model_number": "SPDK bdev Controller", 00:13:14.942 "serial_number": "SPDK0", 00:13:14.942 "firmware_revision": "24.09", 00:13:14.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:14.942 "oacs": { 00:13:14.942 "security": 0, 00:13:14.942 "format": 0, 00:13:14.942 "firmware": 0, 00:13:14.942 "ns_manage": 0 
00:13:14.942 }, 00:13:14.942 "multi_ctrlr": true, 00:13:14.942 "ana_reporting": false 00:13:14.942 }, 00:13:14.942 "vs": { 00:13:14.942 "nvme_version": "1.3" 00:13:14.942 }, 00:13:14.942 "ns_data": { 00:13:14.942 "id": 1, 00:13:14.942 "can_share": true 00:13:14.942 } 00:13:14.942 } 00:13:14.942 ], 00:13:14.942 "mp_policy": "active_passive" 00:13:14.942 } 00:13:14.942 } 00:13:14.942 ] 00:13:14.942 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=69247 00:13:14.942 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:14.942 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:14.942 Running I/O for 10 seconds... 00:13:15.878 Latency(us) 00:13:15.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:15.878 Nvme0n1 : 1.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:15.878 =================================================================================================================== 00:13:15.878 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:15.878 00:13:16.812 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:17.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.070 Nvme0n1 : 2.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:17.070 =================================================================================================================== 00:13:17.070 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:17.070 00:13:17.070 true 00:13:17.070 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:17.070 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:17.328 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:17.328 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:17.328 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 69247 00:13:17.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.894 Nvme0n1 : 3.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:17.894 =================================================================================================================== 00:13:17.894 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:17.894 00:13:18.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.829 Nvme0n1 : 4.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:18.829 =================================================================================================================== 00:13:18.829 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:18.829 00:13:20.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:20.227 Nvme0n1 : 5.00 
5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:20.227 =================================================================================================================== 00:13:20.227 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:20.227 00:13:21.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.161 Nvme0n1 : 6.00 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:21.161 =================================================================================================================== 00:13:21.161 Total : 5842.00 22.82 0.00 0.00 0.00 0.00 0.00 00:13:21.161 00:13:22.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:22.120 Nvme0n1 : 7.00 5805.71 22.68 0.00 0.00 0.00 0.00 0.00 00:13:22.120 =================================================================================================================== 00:13:22.120 Total : 5805.71 22.68 0.00 0.00 0.00 0.00 0.00 00:13:22.120 00:13:23.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:23.088 Nvme0n1 : 8.00 5810.25 22.70 0.00 0.00 0.00 0.00 0.00 00:13:23.088 =================================================================================================================== 00:13:23.088 Total : 5810.25 22.70 0.00 0.00 0.00 0.00 0.00 00:13:23.088 00:13:24.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.022 Nvme0n1 : 9.00 5799.67 22.65 0.00 0.00 0.00 0.00 0.00 00:13:24.022 =================================================================================================================== 00:13:24.022 Total : 5799.67 22.65 0.00 0.00 0.00 0.00 0.00 00:13:24.022 00:13:24.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.955 Nvme0n1 : 10.00 5791.20 22.62 0.00 0.00 0.00 0.00 0.00 00:13:24.955 =================================================================================================================== 00:13:24.955 Total : 5791.20 22.62 0.00 0.00 0.00 0.00 0.00 00:13:24.955 00:13:24.955 00:13:24.955 Latency(us) 00:13:24.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.955 Nvme0n1 : 10.01 5799.20 22.65 0.00 0.00 22063.90 18230.92 45994.36 00:13:24.955 =================================================================================================================== 00:13:24.955 Total : 5799.20 22.65 0.00 0.00 22063.90 18230.92 45994.36 00:13:24.955 0 00:13:24.955 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 69223 00:13:24.955 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 69223 ']' 00:13:24.955 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 69223 00:13:24.955 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:24.955 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:24.955 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69223 00:13:24.955 killing process with pid 69223 00:13:24.955 Received shutdown signal, test time was about 10.000000 seconds 00:13:24.955 00:13:24.955 Latency(us) 00:13:24.955 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:13:24.955 =================================================================================================================== 00:13:24.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:24.955 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:24.955 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:24.955 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69223' 00:13:24.955 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 69223 00:13:24.955 18:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 69223 00:13:26.330 18:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:26.330 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:26.588 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:26.588 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:26.847 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:26.847 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:26.847 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:27.105 [2024-07-22 18:20:39.034242] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:27.105 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:27.105 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:27.105 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:27.105 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:27.105 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.105 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:27.105 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.105 18:20:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:27.105 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.105 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:27.106 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:27.106 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:27.363 request: 00:13:27.364 { 00:13:27.364 "uuid": "67d978ae-a93a-47fe-ae6c-a8d04d09580a", 00:13:27.364 "method": "bdev_lvol_get_lvstores", 00:13:27.364 "req_id": 1 00:13:27.364 } 00:13:27.364 Got JSON-RPC error response 00:13:27.364 response: 00:13:27.364 { 00:13:27.364 "code": -19, 00:13:27.364 "message": "No such device" 00:13:27.364 } 00:13:27.364 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:27.364 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:27.364 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:27.364 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:27.364 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:27.622 aio_bdev 00:13:27.622 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 42647463-a374-4082-bd30-34b0fc77c457 00:13:27.622 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=42647463-a374-4082-bd30-34b0fc77c457 00:13:27.622 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:27.622 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:27.622 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:27.622 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:27.622 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:27.881 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42647463-a374-4082-bd30-34b0fc77c457 -t 2000 00:13:28.139 [ 00:13:28.139 { 00:13:28.139 "name": "42647463-a374-4082-bd30-34b0fc77c457", 00:13:28.139 "aliases": [ 00:13:28.139 "lvs/lvol" 00:13:28.139 ], 00:13:28.139 "product_name": "Logical Volume", 00:13:28.139 "block_size": 4096, 00:13:28.139 "num_blocks": 38912, 00:13:28.139 "uuid": "42647463-a374-4082-bd30-34b0fc77c457", 00:13:28.139 
"assigned_rate_limits": { 00:13:28.139 "rw_ios_per_sec": 0, 00:13:28.139 "rw_mbytes_per_sec": 0, 00:13:28.139 "r_mbytes_per_sec": 0, 00:13:28.139 "w_mbytes_per_sec": 0 00:13:28.139 }, 00:13:28.139 "claimed": false, 00:13:28.139 "zoned": false, 00:13:28.139 "supported_io_types": { 00:13:28.139 "read": true, 00:13:28.139 "write": true, 00:13:28.139 "unmap": true, 00:13:28.139 "flush": false, 00:13:28.139 "reset": true, 00:13:28.139 "nvme_admin": false, 00:13:28.139 "nvme_io": false, 00:13:28.139 "nvme_io_md": false, 00:13:28.139 "write_zeroes": true, 00:13:28.139 "zcopy": false, 00:13:28.139 "get_zone_info": false, 00:13:28.139 "zone_management": false, 00:13:28.139 "zone_append": false, 00:13:28.139 "compare": false, 00:13:28.139 "compare_and_write": false, 00:13:28.139 "abort": false, 00:13:28.139 "seek_hole": true, 00:13:28.139 "seek_data": true, 00:13:28.139 "copy": false, 00:13:28.139 "nvme_iov_md": false 00:13:28.139 }, 00:13:28.139 "driver_specific": { 00:13:28.139 "lvol": { 00:13:28.139 "lvol_store_uuid": "67d978ae-a93a-47fe-ae6c-a8d04d09580a", 00:13:28.139 "base_bdev": "aio_bdev", 00:13:28.139 "thin_provision": false, 00:13:28.139 "num_allocated_clusters": 38, 00:13:28.139 "snapshot": false, 00:13:28.139 "clone": false, 00:13:28.139 "esnap_clone": false 00:13:28.139 } 00:13:28.139 } 00:13:28.139 } 00:13:28.139 ] 00:13:28.139 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:28.139 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:28.139 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:28.398 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:28.398 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:28.398 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:28.964 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:28.964 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 42647463-a374-4082-bd30-34b0fc77c457 00:13:28.964 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 67d978ae-a93a-47fe-ae6c-a8d04d09580a 00:13:29.222 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:29.481 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:30.049 ************************************ 00:13:30.049 END TEST lvs_grow_clean 00:13:30.049 ************************************ 00:13:30.049 00:13:30.049 real 0m19.282s 00:13:30.049 user 0m18.197s 00:13:30.049 sys 0m2.515s 00:13:30.049 18:20:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:30.049 ************************************ 00:13:30.049 START TEST lvs_grow_dirty 00:13:30.049 ************************************ 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:30.049 18:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:30.307 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:30.307 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:30.564 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ba7b82e2-c254-4614-968d-17531e71cc44 00:13:30.565 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:30.565 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:30.821 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 
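The dirty-grow test starts the same way the clean one did: a 200 MiB file is exposed as an AIO bdev and an lvstore with 4 MiB clusters is created on it. As a minimal recap of the commands recorded above (paths abbreviated relative to the SPDK repo; $LVS stands for the lvstore UUID that bdev_lvol_create_lvstore prints):

  truncate -s 200M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096        # 4 KiB block size
  LVS=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # expected: 49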
00:13:30.821 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:30.821 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ba7b82e2-c254-4614-968d-17531e71cc44 lvol 150 00:13:31.078 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=039e41d0-4253-47ef-8fc3-fbe47c9f90c8 00:13:31.078 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:31.078 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:31.336 [2024-07-22 18:20:43.171369] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:31.337 [2024-07-22 18:20:43.171493] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:31.337 true 00:13:31.337 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:31.337 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:31.595 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:31.595 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:31.853 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 039e41d0-4253-47ef-8fc3-fbe47c9f90c8 00:13:32.110 18:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:32.110 [2024-07-22 18:20:44.112127] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.369 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:32.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
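In condensed form, the export sequence just performed: a 150 MiB lvol is carved out of the lvstore, the backing file is enlarged to 400 MiB and the AIO bdev rescanned (the lvstore itself is not grown yet, so total_data_clusters stays at 49), and the lvol is published over NVMe/TCP. $LVS and $LVOL stand for the two UUIDs shown in the log; paths are abbreviated relative to the SPDK repo.

  scripts/rpc.py bdev_lvol_create -u "$LVS" lvol 150                 # 150 MiB logical volume
  truncate -s 400M test/nvmf/target/aio_bdev                         # enlarge the backing file
  scripts/rpc.py bdev_aio_rescan aio_bdev                            # bdev grows from 51200 to 102400 blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420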
00:13:32.629 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=69502 00:13:32.629 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:32.629 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:32.629 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 69502 /var/tmp/bdevperf.sock 00:13:32.629 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 69502 ']' 00:13:32.629 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.629 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.629 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.629 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.629 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:32.629 [2024-07-22 18:20:44.496890] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:32.629 [2024-07-22 18:20:44.497058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69502 ] 00:13:32.887 [2024-07-22 18:20:44.665348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.144 [2024-07-22 18:20:44.936351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.144 [2024-07-22 18:20:45.141554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:33.710 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.710 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:33.710 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:33.710 Nvme0n1 00:13:33.710 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:33.968 [ 00:13:33.968 { 00:13:33.968 "name": "Nvme0n1", 00:13:33.968 "aliases": [ 00:13:33.968 "039e41d0-4253-47ef-8fc3-fbe47c9f90c8" 00:13:33.968 ], 00:13:33.968 "product_name": "NVMe disk", 00:13:33.968 "block_size": 4096, 00:13:33.968 "num_blocks": 38912, 00:13:33.968 "uuid": "039e41d0-4253-47ef-8fc3-fbe47c9f90c8", 00:13:33.968 "assigned_rate_limits": { 00:13:33.968 "rw_ios_per_sec": 0, 00:13:33.968 
"rw_mbytes_per_sec": 0, 00:13:33.968 "r_mbytes_per_sec": 0, 00:13:33.968 "w_mbytes_per_sec": 0 00:13:33.968 }, 00:13:33.968 "claimed": false, 00:13:33.968 "zoned": false, 00:13:33.968 "supported_io_types": { 00:13:33.968 "read": true, 00:13:33.968 "write": true, 00:13:33.968 "unmap": true, 00:13:33.968 "flush": true, 00:13:33.968 "reset": true, 00:13:33.968 "nvme_admin": true, 00:13:33.968 "nvme_io": true, 00:13:33.968 "nvme_io_md": false, 00:13:33.968 "write_zeroes": true, 00:13:33.968 "zcopy": false, 00:13:33.968 "get_zone_info": false, 00:13:33.968 "zone_management": false, 00:13:33.968 "zone_append": false, 00:13:33.968 "compare": true, 00:13:33.968 "compare_and_write": true, 00:13:33.968 "abort": true, 00:13:33.968 "seek_hole": false, 00:13:33.968 "seek_data": false, 00:13:33.968 "copy": true, 00:13:33.968 "nvme_iov_md": false 00:13:33.968 }, 00:13:33.968 "memory_domains": [ 00:13:33.968 { 00:13:33.968 "dma_device_id": "system", 00:13:33.968 "dma_device_type": 1 00:13:33.968 } 00:13:33.968 ], 00:13:33.968 "driver_specific": { 00:13:33.968 "nvme": [ 00:13:33.968 { 00:13:33.968 "trid": { 00:13:33.968 "trtype": "TCP", 00:13:33.968 "adrfam": "IPv4", 00:13:33.968 "traddr": "10.0.0.2", 00:13:33.968 "trsvcid": "4420", 00:13:33.968 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:33.968 }, 00:13:33.968 "ctrlr_data": { 00:13:33.968 "cntlid": 1, 00:13:33.968 "vendor_id": "0x8086", 00:13:33.968 "model_number": "SPDK bdev Controller", 00:13:33.968 "serial_number": "SPDK0", 00:13:33.968 "firmware_revision": "24.09", 00:13:33.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:33.968 "oacs": { 00:13:33.968 "security": 0, 00:13:33.968 "format": 0, 00:13:33.968 "firmware": 0, 00:13:33.968 "ns_manage": 0 00:13:33.968 }, 00:13:33.968 "multi_ctrlr": true, 00:13:33.968 "ana_reporting": false 00:13:33.968 }, 00:13:33.968 "vs": { 00:13:33.968 "nvme_version": "1.3" 00:13:33.968 }, 00:13:33.968 "ns_data": { 00:13:33.968 "id": 1, 00:13:33.968 "can_share": true 00:13:33.968 } 00:13:33.968 } 00:13:33.968 ], 00:13:33.968 "mp_policy": "active_passive" 00:13:33.968 } 00:13:33.968 } 00:13:33.968 ] 00:13:33.968 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=69526 00:13:33.968 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:33.968 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:34.226 Running I/O for 10 seconds... 
00:13:35.158 Latency(us) 00:13:35.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.158 Nvme0n1 : 1.00 5844.00 22.83 0.00 0.00 0.00 0.00 0.00 00:13:35.158 =================================================================================================================== 00:13:35.158 Total : 5844.00 22.83 0.00 0.00 0.00 0.00 0.00 00:13:35.158 00:13:36.090 18:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:36.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.348 Nvme0n1 : 2.00 5906.50 23.07 0.00 0.00 0.00 0.00 0.00 00:13:36.348 =================================================================================================================== 00:13:36.348 Total : 5906.50 23.07 0.00 0.00 0.00 0.00 0.00 00:13:36.348 00:13:36.348 true 00:13:36.348 18:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:36.348 18:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:36.606 18:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:36.606 18:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:36.606 18:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 69526 00:13:37.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.170 Nvme0n1 : 3.00 5927.33 23.15 0.00 0.00 0.00 0.00 0.00 00:13:37.171 =================================================================================================================== 00:13:37.171 Total : 5927.33 23.15 0.00 0.00 0.00 0.00 0.00 00:13:37.171 00:13:38.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.544 Nvme0n1 : 4.00 5937.75 23.19 0.00 0.00 0.00 0.00 0.00 00:13:38.544 =================================================================================================================== 00:13:38.544 Total : 5937.75 23.19 0.00 0.00 0.00 0.00 0.00 00:13:38.544 00:13:39.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.123 Nvme0n1 : 5.00 5839.00 22.81 0.00 0.00 0.00 0.00 0.00 00:13:39.123 =================================================================================================================== 00:13:39.123 Total : 5839.00 22.81 0.00 0.00 0.00 0.00 0.00 00:13:39.123 00:13:40.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.495 Nvme0n1 : 6.00 5776.00 22.56 0.00 0.00 0.00 0.00 0.00 00:13:40.495 =================================================================================================================== 00:13:40.495 Total : 5776.00 22.56 0.00 0.00 0.00 0.00 0.00 00:13:40.495 00:13:41.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.427 Nvme0n1 : 7.00 5767.29 22.53 0.00 0.00 0.00 0.00 0.00 00:13:41.427 =================================================================================================================== 00:13:41.427 
Total : 5767.29 22.53 0.00 0.00 0.00 0.00 0.00 00:13:41.427 00:13:42.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.360 Nvme0n1 : 8.00 5776.62 22.56 0.00 0.00 0.00 0.00 0.00 00:13:42.360 =================================================================================================================== 00:13:42.360 Total : 5776.62 22.56 0.00 0.00 0.00 0.00 0.00 00:13:42.360 00:13:43.294 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:43.294 Nvme0n1 : 9.00 5769.78 22.54 0.00 0.00 0.00 0.00 0.00 00:13:43.294 =================================================================================================================== 00:13:43.294 Total : 5769.78 22.54 0.00 0.00 0.00 0.00 0.00 00:13:43.294 00:13:44.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.230 Nvme0n1 : 10.00 5764.30 22.52 0.00 0.00 0.00 0.00 0.00 00:13:44.230 =================================================================================================================== 00:13:44.230 Total : 5764.30 22.52 0.00 0.00 0.00 0.00 0.00 00:13:44.230 00:13:44.230 00:13:44.230 Latency(us) 00:13:44.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.230 Nvme0n1 : 10.01 5772.03 22.55 0.00 0.00 22167.90 2472.49 91512.09 00:13:44.230 =================================================================================================================== 00:13:44.230 Total : 5772.03 22.55 0.00 0.00 22167.90 2472.49 91512.09 00:13:44.230 0 00:13:44.230 18:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 69502 00:13:44.230 18:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 69502 ']' 00:13:44.230 18:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 69502 00:13:44.230 18:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:44.230 18:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.230 18:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69502 00:13:44.230 killing process with pid 69502 00:13:44.230 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.230 00:13:44.230 Latency(us) 00:13:44.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.230 =================================================================================================================== 00:13:44.230 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.230 18:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:44.230 18:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:44.230 18:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69502' 00:13:44.230 18:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 69502 00:13:44.230 18:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 69502 
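The key assertion in the run above, restated as a sketch: roughly two seconds into the I/O run the lvstore is grown onto the enlarged AIO bdev and the cluster count is re-read; seeing 99 data clusters (up from 49) while bdevperf keeps writing is what the test verifies. $LVS again stands for the lvstore UUID from the log.

  sleep 2                                                   # let I/O get going first
  scripts/rpc.py bdev_lvol_grow_lvstore -u "$LVS"
  data_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters')
  (( data_clusters == 99 ))                                 # the 200 MiB -> 400 MiB backing file roughly doubles the usable clusters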
00:13:45.605 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:45.863 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:46.121 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:46.121 18:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 69135 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 69135 00:13:46.380 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 69135 Killed "${NVMF_APP[@]}" "$@" 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=69671 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 69671 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 69671 ']' 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
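What makes this the "dirty" variant: instead of tearing the lvstore down cleanly, the nvmf target that owns it is killed with SIGKILL and a fresh target is started, leaving dirty lvstore metadata on the AIO file that must be recovered on reattach. A sketch of that step, with $NVMF_PID standing for the old target's pid (69135 above) and paths abbreviated relative to the SPDK repo:

  kill -9 "$NVMF_PID"                                        # simulate a crash while the lvstore is in use
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # re-creating the AIO bdev on the new target triggers blobstore recovery
  # (the "Performing recovery on blobstore" notices that follow)
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096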
00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.380 18:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:46.380 [2024-07-22 18:20:58.326463] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:46.380 [2024-07-22 18:20:58.326653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.638 [2024-07-22 18:20:58.498901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.897 [2024-07-22 18:20:58.744636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.897 [2024-07-22 18:20:58.744709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.897 [2024-07-22 18:20:58.744745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.897 [2024-07-22 18:20:58.744762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.897 [2024-07-22 18:20:58.744775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.897 [2024-07-22 18:20:58.744829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.155 [2024-07-22 18:20:58.951835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:47.413 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.413 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:47.414 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.414 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.414 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:47.414 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.414 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:47.683 [2024-07-22 18:20:59.571413] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:47.683 [2024-07-22 18:20:59.572416] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:47.683 [2024-07-22 18:20:59.572762] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:47.683 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:47.683 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 039e41d0-4253-47ef-8fc3-fbe47c9f90c8 00:13:47.683 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=039e41d0-4253-47ef-8fc3-fbe47c9f90c8 00:13:47.683 18:20:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:47.683 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:47.683 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:47.683 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:47.683 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:47.941 18:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 039e41d0-4253-47ef-8fc3-fbe47c9f90c8 -t 2000 00:13:48.199 [ 00:13:48.199 { 00:13:48.199 "name": "039e41d0-4253-47ef-8fc3-fbe47c9f90c8", 00:13:48.199 "aliases": [ 00:13:48.199 "lvs/lvol" 00:13:48.199 ], 00:13:48.199 "product_name": "Logical Volume", 00:13:48.199 "block_size": 4096, 00:13:48.199 "num_blocks": 38912, 00:13:48.199 "uuid": "039e41d0-4253-47ef-8fc3-fbe47c9f90c8", 00:13:48.199 "assigned_rate_limits": { 00:13:48.199 "rw_ios_per_sec": 0, 00:13:48.199 "rw_mbytes_per_sec": 0, 00:13:48.199 "r_mbytes_per_sec": 0, 00:13:48.199 "w_mbytes_per_sec": 0 00:13:48.199 }, 00:13:48.199 "claimed": false, 00:13:48.199 "zoned": false, 00:13:48.199 "supported_io_types": { 00:13:48.199 "read": true, 00:13:48.199 "write": true, 00:13:48.199 "unmap": true, 00:13:48.199 "flush": false, 00:13:48.199 "reset": true, 00:13:48.199 "nvme_admin": false, 00:13:48.199 "nvme_io": false, 00:13:48.199 "nvme_io_md": false, 00:13:48.199 "write_zeroes": true, 00:13:48.199 "zcopy": false, 00:13:48.199 "get_zone_info": false, 00:13:48.199 "zone_management": false, 00:13:48.199 "zone_append": false, 00:13:48.199 "compare": false, 00:13:48.199 "compare_and_write": false, 00:13:48.199 "abort": false, 00:13:48.199 "seek_hole": true, 00:13:48.199 "seek_data": true, 00:13:48.199 "copy": false, 00:13:48.199 "nvme_iov_md": false 00:13:48.199 }, 00:13:48.199 "driver_specific": { 00:13:48.199 "lvol": { 00:13:48.199 "lvol_store_uuid": "ba7b82e2-c254-4614-968d-17531e71cc44", 00:13:48.199 "base_bdev": "aio_bdev", 00:13:48.199 "thin_provision": false, 00:13:48.199 "num_allocated_clusters": 38, 00:13:48.199 "snapshot": false, 00:13:48.199 "clone": false, 00:13:48.199 "esnap_clone": false 00:13:48.199 } 00:13:48.199 } 00:13:48.199 } 00:13:48.199 ] 00:13:48.199 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:48.199 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:48.199 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:48.458 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:48.458 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:48.458 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r 
'.[0].total_data_clusters' 00:13:48.717 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:48.717 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:48.975 [2024-07-22 18:21:00.868534] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:48.975 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:48.975 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:48.975 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:48.975 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.975 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.975 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.975 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.975 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.975 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.975 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.976 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:48.976 18:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:49.234 request: 00:13:49.234 { 00:13:49.234 "uuid": "ba7b82e2-c254-4614-968d-17531e71cc44", 00:13:49.234 "method": "bdev_lvol_get_lvstores", 00:13:49.234 "req_id": 1 00:13:49.234 } 00:13:49.234 Got JSON-RPC error response 00:13:49.234 response: 00:13:49.234 { 00:13:49.234 "code": -19, 00:13:49.234 "message": "No such device" 00:13:49.234 } 00:13:49.234 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:49.234 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:49.234 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:49.234 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:49.234 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:49.492 aio_bdev 00:13:49.492 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 039e41d0-4253-47ef-8fc3-fbe47c9f90c8 00:13:49.492 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=039e41d0-4253-47ef-8fc3-fbe47c9f90c8 00:13:49.492 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:49.492 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:49.492 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:49.492 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:49.492 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:49.751 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 039e41d0-4253-47ef-8fc3-fbe47c9f90c8 -t 2000 00:13:50.009 [ 00:13:50.009 { 00:13:50.009 "name": "039e41d0-4253-47ef-8fc3-fbe47c9f90c8", 00:13:50.009 "aliases": [ 00:13:50.009 "lvs/lvol" 00:13:50.009 ], 00:13:50.009 "product_name": "Logical Volume", 00:13:50.009 "block_size": 4096, 00:13:50.009 "num_blocks": 38912, 00:13:50.009 "uuid": "039e41d0-4253-47ef-8fc3-fbe47c9f90c8", 00:13:50.009 "assigned_rate_limits": { 00:13:50.009 "rw_ios_per_sec": 0, 00:13:50.009 "rw_mbytes_per_sec": 0, 00:13:50.009 "r_mbytes_per_sec": 0, 00:13:50.009 "w_mbytes_per_sec": 0 00:13:50.009 }, 00:13:50.009 "claimed": false, 00:13:50.009 "zoned": false, 00:13:50.009 "supported_io_types": { 00:13:50.009 "read": true, 00:13:50.009 "write": true, 00:13:50.009 "unmap": true, 00:13:50.009 "flush": false, 00:13:50.009 "reset": true, 00:13:50.009 "nvme_admin": false, 00:13:50.009 "nvme_io": false, 00:13:50.009 "nvme_io_md": false, 00:13:50.009 "write_zeroes": true, 00:13:50.009 "zcopy": false, 00:13:50.009 "get_zone_info": false, 00:13:50.009 "zone_management": false, 00:13:50.009 "zone_append": false, 00:13:50.009 "compare": false, 00:13:50.009 "compare_and_write": false, 00:13:50.009 "abort": false, 00:13:50.009 "seek_hole": true, 00:13:50.009 "seek_data": true, 00:13:50.009 "copy": false, 00:13:50.009 "nvme_iov_md": false 00:13:50.009 }, 00:13:50.009 "driver_specific": { 00:13:50.009 "lvol": { 00:13:50.009 "lvol_store_uuid": "ba7b82e2-c254-4614-968d-17531e71cc44", 00:13:50.009 "base_bdev": "aio_bdev", 00:13:50.009 "thin_provision": false, 00:13:50.009 "num_allocated_clusters": 38, 00:13:50.009 "snapshot": false, 00:13:50.009 "clone": false, 00:13:50.009 "esnap_clone": false 00:13:50.009 } 00:13:50.009 } 00:13:50.009 } 00:13:50.009 ] 00:13:50.009 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:50.009 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:50.009 18:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:13:50.267 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:50.267 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:50.267 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:50.598 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:50.598 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 039e41d0-4253-47ef-8fc3-fbe47c9f90c8 00:13:50.860 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba7b82e2-c254-4614-968d-17531e71cc44 00:13:51.118 18:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:51.376 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:51.634 ************************************ 00:13:51.634 END TEST lvs_grow_dirty 00:13:51.634 ************************************ 00:13:51.634 00:13:51.634 real 0m21.608s 00:13:51.634 user 0m47.569s 00:13:51.634 sys 0m7.605s 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:51.634 nvmf_trace.0 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:51.634 18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:51.634 
18:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:52.200 rmmod nvme_tcp 00:13:52.200 rmmod nvme_fabrics 00:13:52.200 rmmod nvme_keyring 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 69671 ']' 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 69671 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 69671 ']' 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 69671 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69671 00:13:52.200 killing process with pid 69671 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69671' 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 69671 00:13:52.200 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 69671 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:53.575 00:13:53.575 real 0m44.558s 00:13:53.575 user 1m13.261s 00:13:53.575 sys 0m11.168s 00:13:53.575 18:21:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:53.575 ************************************ 00:13:53.575 END TEST nvmf_lvs_grow 00:13:53.575 ************************************ 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:53.575 ************************************ 00:13:53.575 START TEST nvmf_bdev_io_wait 00:13:53.575 ************************************ 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:53.575 * Looking for test storage... 00:13:53.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:13:53.575 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:53.576 18:21:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:53.576 Cannot find device "nvmf_tgt_br" 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:53.576 Cannot find device "nvmf_tgt_br2" 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:53.576 Cannot find device "nvmf_tgt_br" 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:53.576 Cannot find device "nvmf_tgt_br2" 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:53.576 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:53.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:53.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:53.835 18:21:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:53.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:13:53.835 00:13:53.835 --- 10.0.0.2 ping statistics --- 00:13:53.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.835 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:53.835 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:53.835 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:13:53.835 00:13:53.835 --- 10.0.0.3 ping statistics --- 00:13:53.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.835 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:53.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:53.835 00:13:53.835 --- 10.0.0.1 ping statistics --- 00:13:53.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.835 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:53.835 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:53.836 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:53.836 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.836 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=70000 00:13:53.836 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 70000 00:13:53.836 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 70000 ']' 00:13:53.836 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.836 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.836 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:53.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.836 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
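The nvmf_veth_init sequence traced above builds the NET_TYPE=virt test topology: one initiator veth pair left on the host (10.0.0.1), two target pairs moved into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), the host-side peers joined by the nvmf_br bridge, an iptables rule admitting TCP port 4420, and a ping check in each direction. A condensed, self-contained sketch of the same commands follows (interface names and addresses copied from the trace; requires root and assumes no leftover interfaces from a previous run):

#!/usr/bin/env bash
# Rebuild the virtual NVMe/TCP test topology that nvmf_veth_init creates.
set -e
ip netns add nvmf_tgt_ns_spdk
# veth pairs: the *_if ends carry the IP addresses, the *_br ends get enslaved to the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# 10.0.0.1 = initiator (host), 10.0.0.2 / 10.0.0.3 = target listeners (namespace).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring every link up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side peers so initiator and target share one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Admit NVMe/TCP traffic and verify reachability the same way the harness does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1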
00:13:53.836 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.836 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:54.094 [2024-07-22 18:21:05.968171] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:54.094 [2024-07-22 18:21:05.968381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.352 [2024-07-22 18:21:06.148322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.610 [2024-07-22 18:21:06.470481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.610 [2024-07-22 18:21:06.470596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.610 [2024-07-22 18:21:06.470629] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.610 [2024-07-22 18:21:06.470658] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.610 [2024-07-22 18:21:06.470685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.610 [2024-07-22 18:21:06.470958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.610 [2024-07-22 18:21:06.471355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.610 [2024-07-22 18:21:06.471913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.610 [2024-07-22 18:21:06.471940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.175 18:21:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.175 [2024-07-22 18:21:07.170676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion 
override: uring 00:13:55.175 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.175 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:55.175 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.175 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.434 [2024-07-22 18:21:07.194175] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.434 Malloc0 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.434 [2024-07-22 18:21:07.325463] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=70042 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:55.434 18:21:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=70044 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:55.434 { 00:13:55.434 "params": { 00:13:55.434 "name": "Nvme$subsystem", 00:13:55.434 "trtype": "$TEST_TRANSPORT", 00:13:55.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.434 "adrfam": "ipv4", 00:13:55.434 "trsvcid": "$NVMF_PORT", 00:13:55.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.434 "hdgst": ${hdgst:-false}, 00:13:55.434 "ddgst": ${ddgst:-false} 00:13:55.434 }, 00:13:55.434 "method": "bdev_nvme_attach_controller" 00:13:55.434 } 00:13:55.434 EOF 00:13:55.434 )") 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=70046 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:55.434 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:55.434 { 00:13:55.434 "params": { 00:13:55.434 "name": "Nvme$subsystem", 00:13:55.434 "trtype": "$TEST_TRANSPORT", 00:13:55.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.434 "adrfam": "ipv4", 00:13:55.434 "trsvcid": "$NVMF_PORT", 00:13:55.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.434 "hdgst": ${hdgst:-false}, 00:13:55.434 "ddgst": ${ddgst:-false} 00:13:55.434 }, 00:13:55.434 "method": "bdev_nvme_attach_controller" 00:13:55.434 } 00:13:55.434 EOF 00:13:55.434 )") 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=70049 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:55.435 { 00:13:55.435 "params": { 00:13:55.435 "name": "Nvme$subsystem", 00:13:55.435 "trtype": "$TEST_TRANSPORT", 00:13:55.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.435 "adrfam": "ipv4", 00:13:55.435 "trsvcid": "$NVMF_PORT", 00:13:55.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.435 "hdgst": ${hdgst:-false}, 00:13:55.435 "ddgst": ${ddgst:-false} 00:13:55.435 }, 00:13:55.435 "method": "bdev_nvme_attach_controller" 00:13:55.435 } 00:13:55.435 EOF 00:13:55.435 )") 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:55.435 "params": { 00:13:55.435 "name": "Nvme1", 00:13:55.435 "trtype": "tcp", 00:13:55.435 "traddr": "10.0.0.2", 00:13:55.435 "adrfam": "ipv4", 00:13:55.435 "trsvcid": "4420", 00:13:55.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.435 "hdgst": false, 00:13:55.435 "ddgst": false 00:13:55.435 }, 00:13:55.435 "method": "bdev_nvme_attach_controller" 00:13:55.435 }' 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:55.435 "params": { 00:13:55.435 "name": "Nvme1", 00:13:55.435 "trtype": "tcp", 00:13:55.435 "traddr": "10.0.0.2", 00:13:55.435 "adrfam": "ipv4", 00:13:55.435 "trsvcid": "4420", 00:13:55.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.435 "hdgst": false, 00:13:55.435 "ddgst": false 00:13:55.435 }, 00:13:55.435 "method": "bdev_nvme_attach_controller" 00:13:55.435 }' 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:55.435 { 00:13:55.435 "params": { 00:13:55.435 "name": "Nvme$subsystem", 00:13:55.435 "trtype": "$TEST_TRANSPORT", 00:13:55.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.435 "adrfam": "ipv4", 00:13:55.435 "trsvcid": "$NVMF_PORT", 00:13:55.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.435 "hdgst": ${hdgst:-false}, 00:13:55.435 "ddgst": ${ddgst:-false} 00:13:55.435 }, 00:13:55.435 "method": 
"bdev_nvme_attach_controller" 00:13:55.435 } 00:13:55.435 EOF 00:13:55.435 )") 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:55.435 "params": { 00:13:55.435 "name": "Nvme1", 00:13:55.435 "trtype": "tcp", 00:13:55.435 "traddr": "10.0.0.2", 00:13:55.435 "adrfam": "ipv4", 00:13:55.435 "trsvcid": "4420", 00:13:55.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.435 "hdgst": false, 00:13:55.435 "ddgst": false 00:13:55.435 }, 00:13:55.435 "method": "bdev_nvme_attach_controller" 00:13:55.435 }' 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:55.435 "params": { 00:13:55.435 "name": "Nvme1", 00:13:55.435 "trtype": "tcp", 00:13:55.435 "traddr": "10.0.0.2", 00:13:55.435 "adrfam": "ipv4", 00:13:55.435 "trsvcid": "4420", 00:13:55.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.435 "hdgst": false, 00:13:55.435 "ddgst": false 00:13:55.435 }, 00:13:55.435 "method": "bdev_nvme_attach_controller" 00:13:55.435 }' 00:13:55.435 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 70042 00:13:55.435 [2024-07-22 18:21:07.443990] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:55.435 [2024-07-22 18:21:07.444958] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:55.694 [2024-07-22 18:21:07.453377] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:55.694 [2024-07-22 18:21:07.453520] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:55.694 [2024-07-22 18:21:07.482883] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:13:55.694 [2024-07-22 18:21:07.483053] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:55.694 [2024-07-22 18:21:07.500047] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:13:55.694 [2024-07-22 18:21:07.500277] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:55.694 [2024-07-22 18:21:07.689235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.952 [2024-07-22 18:21:07.772286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.952 [2024-07-22 18:21:07.844471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.952 [2024-07-22 18:21:07.907981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:55.952 [2024-07-22 18:21:07.929751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.210 [2024-07-22 18:21:08.026436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:56.210 [2024-07-22 18:21:08.060176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:56.210 [2024-07-22 18:21:08.103817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:56.210 [2024-07-22 18:21:08.149568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:56.210 [2024-07-22 18:21:08.227120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:56.469 [2024-07-22 18:21:08.259304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:56.469 Running I/O for 1 seconds... 00:13:56.469 [2024-07-22 18:21:08.347533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:56.469 Running I/O for 1 seconds... 00:13:56.469 Running I/O for 1 seconds... 00:13:56.728 Running I/O for 1 seconds... 
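Before the results below, the target subsystem is assembled over RPC (the steps traced at bdev_io_wait.sh@18-25 above) and four bdevperf instances attach to it in parallel, one per workload (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each reading its bdev configuration from the JSON that gen_nvmf_target_json prints. A condensed sketch of one such pairing is given here; rpc.py and bdevperf paths, NQNs and addresses are copied from the trace, the outer "subsystems"/"bdev" wrapper follows SPDK's standard JSON config layout and is assumed (the trace only prints the bdev_nvme_attach_controller entry), and the temporary file name is illustrative, the harness instead passes the JSON over /dev/fd/63.

# Target side: drive the nvmf_tgt that nvmfappstart launched with --wait-for-rpc.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: one of the four parallel bdevperf runs (write workload, core mask 0x10).
# The read/flush/unmap instances differ only in -w, -m and the instance id -i.
cat > /tmp/nvme1.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json \
  -q 128 -o 4096 -w write -t 1 -s 256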
00:13:57.293 00:13:57.293 Latency(us) 00:13:57.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.293 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:57.293 Nvme1n1 : 1.02 4850.49 18.95 0.00 0.00 26148.58 7745.16 60769.75 00:13:57.293 =================================================================================================================== 00:13:57.293 Total : 4850.49 18.95 0.00 0.00 26148.58 7745.16 60769.75 00:13:57.552 00:13:57.552 Latency(us) 00:13:57.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.552 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:57.552 Nvme1n1 : 1.01 4540.92 17.74 0.00 0.00 28035.22 10307.03 59339.87 00:13:57.552 =================================================================================================================== 00:13:57.552 Total : 4540.92 17.74 0.00 0.00 28035.22 10307.03 59339.87 00:13:57.552 00:13:57.552 Latency(us) 00:13:57.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.552 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:57.552 Nvme1n1 : 1.01 6632.28 25.91 0.00 0.00 19180.81 3261.91 27644.28 00:13:57.552 =================================================================================================================== 00:13:57.552 Total : 6632.28 25.91 0.00 0.00 19180.81 3261.91 27644.28 00:13:57.552 00:13:57.552 Latency(us) 00:13:57.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.552 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:57.552 Nvme1n1 : 1.00 127608.41 498.47 0.00 0.00 999.69 480.35 1586.27 00:13:57.552 =================================================================================================================== 00:13:57.552 Total : 127608.41 498.47 0.00 0.00 999.69 480.35 1586.27 00:13:58.486 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 70044 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 70046 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 70049 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
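The nvmftestfini teardown that follows mirrors the one at the end of the lvs_grow suite above: unload the NVMe host modules, kill the nvmf_tgt pid recorded by nvmfappstart, remove the test namespace and flush the initiator address. A condensed sketch of those steps (pid and interface names taken from this run; _remove_spdk_ns is the harness helper that tears down nvmf_tgt_ns_spdk and its links, approximated here with a plain ip netns delete):

# Teardown, as nvmftestfini performs it for this run (nvmfpid=70000).
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 70000                        # killprocess: stop nvmf_tgt, then the harness waits for it to exit
ip netns delete nvmf_tgt_ns_spdk  # approximation of the _remove_spdk_ns helper
ip -4 addr flush nvmf_init_if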
00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.744 rmmod nvme_tcp 00:13:58.744 rmmod nvme_fabrics 00:13:58.744 rmmod nvme_keyring 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 70000 ']' 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 70000 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 70000 ']' 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 70000 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70000 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:58.744 killing process with pid 70000 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70000' 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 70000 00:13:58.744 18:21:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 70000 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:00.122 00:14:00.122 real 0m6.558s 00:14:00.122 user 0m29.953s 00:14:00.122 sys 0m2.710s 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:00.122 ************************************ 00:14:00.122 END TEST nvmf_bdev_io_wait 
00:14:00.122 ************************************ 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:00.122 ************************************ 00:14:00.122 START TEST nvmf_queue_depth 00:14:00.122 ************************************ 00:14:00.122 18:21:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:00.122 * Looking for test storage... 00:14:00.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:00.122 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:00.123 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.123 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.123 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 
00:14:00.123 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.123 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.123 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.123 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.123 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.123 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:00.123 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:00.381 Cannot find device "nvmf_tgt_br" 00:14:00.381 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:14:00.381 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.381 Cannot find device "nvmf_tgt_br2" 00:14:00.381 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:14:00.381 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:00.381 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:00.381 Cannot find device "nvmf_tgt_br" 00:14:00.381 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:14:00.381 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:00.381 Cannot find device "nvmf_tgt_br2" 00:14:00.381 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:14:00.381 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:00.381 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:00.381 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.382 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:00.382 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:00.641 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:00.641 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:00.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:14:00.641 00:14:00.641 --- 10.0.0.2 ping statistics --- 00:14:00.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.641 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:00.641 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:00.641 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:00.641 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:00.641 00:14:00.641 --- 10.0.0.3 ping statistics --- 00:14:00.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.641 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:00.641 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:00.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:00.641 00:14:00.641 --- 10.0.0.1 ping statistics --- 00:14:00.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.641 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:00.641 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.641 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:14:00.641 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=70313 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 70313 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 70313 ']' 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
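Condensed from the trace that follows, the queue_depth test body amounts to roughly this sequence (a sketch only: rpc_cmd in the test scripts drives the same JSON-RPC methods that scripts/rpc.py exposes, and the paths shown are the ones used on this runner):

  SPDK=/home/vagrant/spdk_repo/spdk

  # Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, one
  # subsystem with that bdev as a namespace, one listener on 10.0.0.2:4420.
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf in wait-for-RPC mode (-z), queue depth 1024,
  # 4 KiB I/O, verify workload, 10-second run.
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

  # Attach the remote namespace as an NVMe bdev, then start the test.
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests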
00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.642 18:21:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:00.642 [2024-07-22 18:21:12.567064] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:00.642 [2024-07-22 18:21:12.567244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.900 [2024-07-22 18:21:12.739555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.159 [2024-07-22 18:21:13.001072] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.159 [2024-07-22 18:21:13.001152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.159 [2024-07-22 18:21:13.001170] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.159 [2024-07-22 18:21:13.001186] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.159 [2024-07-22 18:21:13.001198] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.159 [2024-07-22 18:21:13.001276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.427 [2024-07-22 18:21:13.200584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:01.685 [2024-07-22 18:21:13.523494] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:01.685 Malloc0 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.685 18:21:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:01.685 [2024-07-22 18:21:13.637553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=70345 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 70345 /var/tmp/bdevperf.sock 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 70345 ']' 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.685 18:21:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:01.943 [2024-07-22 18:21:13.752316] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:14:01.943 [2024-07-22 18:21:13.752489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70345 ] 00:14:01.943 [2024-07-22 18:21:13.928621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.202 [2024-07-22 18:21:14.166080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.460 [2024-07-22 18:21:14.367404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:02.720 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.720 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:02.720 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:02.720 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.720 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:02.981 NVMe0n1 00:14:02.981 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.981 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:02.981 Running I/O for 10 seconds... 00:14:15.175 00:14:15.175 Latency(us) 00:14:15.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.175 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:15.175 Verification LBA range: start 0x0 length 0x4000 00:14:15.175 NVMe0n1 : 10.14 5846.42 22.84 0.00 0.00 174025.47 27763.43 123922.62 00:14:15.175 =================================================================================================================== 00:14:15.175 Total : 5846.42 22.84 0.00 0.00 174025.47 27763.43 123922.62 00:14:15.175 0 00:14:15.175 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 70345 00:14:15.175 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 70345 ']' 00:14:15.175 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 70345 00:14:15.175 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:15.175 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.175 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70345 00:14:15.175 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:15.175 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:15.175 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70345' 00:14:15.175 killing process with pid 70345 00:14:15.175 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- 
# kill 70345 00:14:15.175 Received shutdown signal, test time was about 10.000000 seconds 00:14:15.175 00:14:15.175 Latency(us) 00:14:15.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.175 =================================================================================================================== 00:14:15.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:15.176 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 70345 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.176 rmmod nvme_tcp 00:14:15.176 rmmod nvme_fabrics 00:14:15.176 rmmod nvme_keyring 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 70313 ']' 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 70313 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 70313 ']' 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 70313 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70313 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70313' 00:14:15.176 killing process with pid 70313 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 70313 00:14:15.176 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 70313 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:16.106 18:21:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:16.106 00:14:16.106 real 0m15.819s 00:14:16.106 user 0m26.686s 00:14:16.106 sys 0m2.334s 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:16.106 ************************************ 00:14:16.106 END TEST nvmf_queue_depth 00:14:16.106 ************************************ 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:14:16.106 18:21:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:16.107 ************************************ 00:14:16.107 START TEST nvmf_target_multipath 00:14:16.107 ************************************ 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:16.107 * Looking for test storage... 
00:14:16.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:16.107 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:16.108 18:21:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:16.108 18:21:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:16.108 Cannot find device "nvmf_tgt_br" 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:16.108 Cannot find device "nvmf_tgt_br2" 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:16.108 Cannot find device "nvmf_tgt_br" 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:16.108 Cannot find device "nvmf_tgt_br2" 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:16.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:16.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:14:16.108 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:16.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:14:16.366 00:14:16.366 --- 10.0.0.2 ping statistics --- 00:14:16.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.366 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:16.366 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:16.366 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:14:16.366 00:14:16.366 --- 10.0.0.3 ping statistics --- 00:14:16.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.366 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:16.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:16.366 00:14:16.366 --- 10.0.0.1 ping statistics --- 00:14:16.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.366 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.366 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=70688 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 70688 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 70688 ']' 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
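The multipath test that follows exercises ANA failover over two TCP listeners; condensed from the trace below, the flow is roughly this sketch (hostnqn/hostid are the values produced by nvme gen-hostnqn earlier in this test, -r on nvmf_create_subsystem enables ANA reporting, and the nvme0c0n1/nvme0c1n1 sysfs names depend on controller enumeration order):

  SPDK=/home/vagrant/spdk_repo/spdk
  NQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96
  HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96

  # Target: one ANA-reporting subsystem backed by Malloc0, listening on
  # both target addresses.
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $SPDK/scripts/rpc.py nvmf_create_subsystem $NQN -a -s SPDKISFASTANDAWESOME -r
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns $NQN Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4420

  # Initiator: connect through both portals so the kernel assembles a single
  # multipath subsystem with two controller paths.
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $NQN -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $NQN -a 10.0.0.3 -s 4420 -g -G

  # Each path reports its ANA state through sysfs; both start as "optimized".
  cat /sys/block/nvme0c0n1/ana_state
  cat /sys/block/nvme0c1n1/ana_state

  # Run fio against the multipath device while flipping path states on the
  # target; I/O is expected to keep flowing over whichever path is usable.
  $SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v &
  FIO_PID=$!
  sleep 1
  $SPDK/scripts/rpc.py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  $SPDK/scripts/rpc.py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  wait $FIO_PID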
00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.367 18:21:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:16.624 [2024-07-22 18:21:28.481037] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:16.624 [2024-07-22 18:21:28.481235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.883 [2024-07-22 18:21:28.667764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.141 [2024-07-22 18:21:28.971001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.141 [2024-07-22 18:21:28.971106] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.141 [2024-07-22 18:21:28.971124] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.141 [2024-07-22 18:21:28.971139] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.141 [2024-07-22 18:21:28.971154] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.141 [2024-07-22 18:21:28.971406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.141 [2024-07-22 18:21:28.971560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.141 [2024-07-22 18:21:28.972240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.141 [2024-07-22 18:21:28.972243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.399 [2024-07-22 18:21:29.179260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:17.657 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.657 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:14:17.657 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.657 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.657 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:17.657 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.657 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:17.958 [2024-07-22 18:21:29.709461] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.958 18:21:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:18.218 Malloc0 00:14:18.218 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:14:18.477 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:18.734 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.991 [2024-07-22 18:21:30.843032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.991 18:21:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:19.249 [2024-07-22 18:21:31.079148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:19.249 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:14:19.249 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:14:19.508 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.508 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:14:19.508 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.508 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:19.508 18:21:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=70783 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:21.409 18:21:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:14:21.409 [global] 00:14:21.409 thread=1 00:14:21.409 invalidate=1 00:14:21.409 rw=randrw 00:14:21.409 time_based=1 00:14:21.409 runtime=6 00:14:21.409 ioengine=libaio 00:14:21.409 direct=1 00:14:21.409 bs=4096 00:14:21.409 iodepth=128 00:14:21.409 norandommap=0 00:14:21.409 numjobs=1 00:14:21.409 00:14:21.409 verify_dump=1 00:14:21.410 verify_backlog=512 00:14:21.410 verify_state_save=0 00:14:21.410 do_verify=1 00:14:21.410 verify=crc32c-intel 00:14:21.410 [job0] 00:14:21.410 filename=/dev/nvme0n1 00:14:21.667 Could not set queue depth (nvme0n1) 00:14:21.667 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:21.667 fio-3.35 00:14:21.667 Starting 1 thread 00:14:22.599 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:22.857 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:23.184 18:21:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:23.442 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:23.702 18:21:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 70783 00:14:27.887 00:14:27.887 job0: (groupid=0, jobs=1): err= 0: pid=70804: Mon Jul 22 18:21:39 2024 00:14:27.887 read: IOPS=8171, BW=31.9MiB/s (33.5MB/s)(192MiB/6002msec) 00:14:27.887 slat (usec): min=6, max=7923, avg=73.59, stdev=295.80 00:14:27.887 clat (usec): min=1672, max=19688, avg=10694.84, stdev=1870.44 00:14:27.887 lat (usec): min=2102, max=19704, avg=10768.43, stdev=1873.13 00:14:27.887 clat percentiles (usec): 00:14:27.887 | 1.00th=[ 5342], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[ 9765], 00:14:27.887 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:14:27.887 | 70.00th=[10945], 80.00th=[11469], 90.00th=[12256], 95.00th=[15270], 00:14:27.887 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:14:27.887 | 99.99th=[19268] 00:14:27.887 bw ( KiB/s): min= 3888, max=21000, per=54.43%, avg=17790.18, stdev=4843.67, samples=11 00:14:27.887 iops : min= 972, max= 5250, avg=4447.55, stdev=1210.92, samples=11 00:14:27.887 write: IOPS=4798, BW=18.7MiB/s (19.7MB/s)(99.0MiB/5280msec); 0 zone resets 00:14:27.887 slat (usec): min=14, max=3076, avg=82.84, stdev=219.25 00:14:27.887 clat (usec): min=1821, max=19070, avg=9353.66, stdev=1673.65 00:14:27.887 lat (usec): min=1847, max=19094, avg=9436.50, stdev=1681.27 00:14:27.887 clat percentiles (usec): 00:14:27.887 | 1.00th=[ 3982], 5.00th=[ 5473], 10.00th=[ 7701], 20.00th=[ 8717], 00:14:27.887 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:14:27.887 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:14:27.887 | 99.00th=[14615], 99.50th=[15401], 99.90th=[16909], 99.95th=[17695], 00:14:27.887 | 99.99th=[18220] 00:14:27.887 bw ( KiB/s): min= 4104, max=20480, per=92.61%, avg=17774.73, stdev=4677.47, samples=11 00:14:27.887 iops : min= 1026, max= 5120, avg=4443.64, stdev=1169.35, samples=11 00:14:27.887 lat (msec) : 2=0.01%, 4=0.43%, 10=39.71%, 20=59.86% 00:14:27.887 cpu : usr=4.87%, sys=18.28%, ctx=4374, majf=0, minf=96 00:14:27.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:27.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:27.887 issued rwts: total=49046,25335,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:27.887 00:14:27.887 Run status group 0 (all jobs): 00:14:27.887 READ: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=192MiB (201MB), run=6002-6002msec 00:14:27.887 WRITE: bw=18.7MiB/s (19.7MB/s), 18.7MiB/s-18.7MiB/s (19.7MB/s-19.7MB/s), io=99.0MiB (104MB), run=5280-5280msec 00:14:27.887 00:14:27.887 Disk stats (read/write): 00:14:27.887 nvme0n1: ios=47916/25335, merge=0/0, ticks=494311/224078, in_queue=718389, util=98.51% 00:14:27.887 18:21:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:14:28.145 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=70884 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:14:28.434 18:21:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:14:28.435 [global] 00:14:28.435 thread=1 00:14:28.435 invalidate=1 00:14:28.435 rw=randrw 00:14:28.435 time_based=1 00:14:28.435 runtime=6 00:14:28.435 ioengine=libaio 00:14:28.435 direct=1 00:14:28.435 bs=4096 00:14:28.435 iodepth=128 00:14:28.435 norandommap=0 00:14:28.435 numjobs=1 00:14:28.435 00:14:28.435 verify_dump=1 00:14:28.435 verify_backlog=512 00:14:28.435 verify_state_save=0 00:14:28.435 do_verify=1 00:14:28.435 verify=crc32c-intel 00:14:28.435 [job0] 00:14:28.435 filename=/dev/nvme0n1 00:14:28.435 Could not set queue depth (nvme0n1) 00:14:28.693 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:28.693 fio-3.35 00:14:28.693 Starting 1 thread 00:14:29.626 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:14:29.627 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:14:29.884 
18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:29.884 18:21:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:14:30.451 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:14:30.725 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 70884 00:14:34.931 00:14:34.931 job0: (groupid=0, jobs=1): err= 0: pid=70905: Mon Jul 22 18:21:46 2024 00:14:34.931 read: IOPS=8659, BW=33.8MiB/s (35.5MB/s)(203MiB/6008msec) 00:14:34.931 slat (usec): min=4, max=11573, avg=62.21, stdev=282.47 00:14:34.931 clat (usec): min=303, max=34811, avg=10359.06, stdev=3960.05 00:14:34.931 lat (usec): min=329, max=34827, avg=10421.27, stdev=3982.01 00:14:34.931 clat percentiles (usec): 00:14:34.931 | 1.00th=[ 1270], 5.00th=[ 4293], 10.00th=[ 5932], 20.00th=[ 7832], 00:14:34.931 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10421], 60.00th=[10683], 00:14:34.931 | 70.00th=[11076], 80.00th=[11731], 90.00th=[14222], 95.00th=[17171], 00:14:34.931 | 99.00th=[24773], 99.50th=[27132], 99.90th=[32113], 99.95th=[32637], 00:14:34.931 | 99.99th=[34866] 00:14:34.932 bw ( KiB/s): min= 2000, max=28858, per=54.23%, avg=18784.18, stdev=7944.92, samples=11 00:14:34.932 iops : min= 500, max= 7214, avg=4696.00, stdev=1986.17, samples=11 00:14:34.932 write: IOPS=5243, BW=20.5MiB/s (21.5MB/s)(102MiB/4997msec); 0 zone resets 00:14:34.932 slat (usec): min=12, max=6123, avg=69.24, stdev=199.47 00:14:34.932 clat (usec): min=317, max=34467, avg=8538.07, stdev=3573.61 00:14:34.932 lat (usec): min=378, max=34487, avg=8607.31, stdev=3596.71 00:14:34.932 clat percentiles (usec): 00:14:34.932 | 1.00th=[ 1205], 5.00th=[ 3195], 10.00th=[ 4359], 20.00th=[ 5407], 00:14:34.932 | 30.00th=[ 6456], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9503], 00:14:34.932 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11863], 95.00th=[13435], 00:14:34.932 | 99.00th=[21890], 99.50th=[23200], 99.90th=[25035], 99.95th=[30278], 00:14:34.932 | 99.99th=[32113] 00:14:34.932 bw ( KiB/s): min= 2040, max=29876, per=89.59%, avg=18792.36, stdev=7919.77, samples=11 00:14:34.932 iops : min= 510, max= 7469, avg=4698.09, stdev=1979.94, samples=11 00:14:34.932 lat (usec) : 500=0.07%, 750=0.24%, 1000=0.31% 00:14:34.932 lat (msec) : 2=1.97%, 4=3.15%, 10=43.90%, 20=47.77%, 50=2.58% 00:14:34.932 cpu : usr=5.76%, sys=20.81%, ctx=4951, majf=0, minf=121 00:14:34.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:34.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:34.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:34.932 issued rwts: total=52026,26203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:34.932 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:14:34.932 00:14:34.932 Run status group 0 (all jobs): 00:14:34.932 READ: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=203MiB (213MB), run=6008-6008msec 00:14:34.932 WRITE: bw=20.5MiB/s (21.5MB/s), 20.5MiB/s-20.5MiB/s (21.5MB/s-21.5MB/s), io=102MiB (107MB), run=4997-4997msec 00:14:34.932 00:14:34.932 Disk stats (read/write): 00:14:34.932 nvme0n1: ios=51269/25862, merge=0/0, ticks=510310/206591, in_queue=716901, util=98.62% 00:14:34.932 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:34.932 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.932 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:14:34.932 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:34.932 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.932 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:34.932 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.932 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:14:34.932 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.190 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:14:35.190 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:14:35.190 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:14:35.190 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:14:35.190 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.190 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:35.190 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.190 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:35.190 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.190 18:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.190 rmmod nvme_tcp 00:14:35.190 rmmod nvme_fabrics 00:14:35.191 rmmod nvme_keyring 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # 
'[' -n 70688 ']' 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 70688 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 70688 ']' 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 70688 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70688 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:35.191 killing process with pid 70688 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70688' 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 70688 00:14:35.191 18:21:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 70688 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:36.565 00:14:36.565 real 0m20.653s 00:14:36.565 user 1m15.817s 00:14:36.565 sys 0m9.053s 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:36.565 ************************************ 00:14:36.565 END TEST nvmf_target_multipath 00:14:36.565 ************************************ 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:36.565 
18:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:36.565 ************************************ 00:14:36.565 START TEST nvmf_zcopy 00:14:36.565 ************************************ 00:14:36.565 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:36.824 * Looking for test storage... 00:14:36.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.824 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:36.825 Cannot find device "nvmf_tgt_br" 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:36.825 Cannot find device "nvmf_tgt_br2" 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:36.825 Cannot find device "nvmf_tgt_br" 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:36.825 Cannot find device "nvmf_tgt_br2" 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:36.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:36.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:36.825 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:37.084 18:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:37.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:14:37.084 00:14:37.084 --- 10.0.0.2 ping statistics --- 00:14:37.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.084 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:37.084 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:37.084 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:14:37.084 00:14:37.084 --- 10.0.0.3 ping statistics --- 00:14:37.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.084 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:37.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:37.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:37.084 00:14:37.084 --- 10.0.0.1 ping statistics --- 00:14:37.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.084 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=71167 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 71167 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 71167 ']' 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.084 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:37.343 [2024-07-22 18:21:49.194426] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:14:37.343 [2024-07-22 18:21:49.194577] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.600 [2024-07-22 18:21:49.361864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.859 [2024-07-22 18:21:49.667976] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.859 [2024-07-22 18:21:49.668064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.859 [2024-07-22 18:21:49.668087] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.859 [2024-07-22 18:21:49.668105] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.859 [2024-07-22 18:21:49.668119] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.859 [2024-07-22 18:21:49.668180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.117 [2024-07-22 18:21:49.895896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.375 [2024-07-22 18:21:50.191668] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.375 [2024-07-22 18:21:50.207844] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.375 malloc0 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:38.375 { 00:14:38.375 "params": { 00:14:38.375 "name": "Nvme$subsystem", 00:14:38.375 "trtype": "$TEST_TRANSPORT", 00:14:38.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:38.375 "adrfam": "ipv4", 00:14:38.375 "trsvcid": "$NVMF_PORT", 00:14:38.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:38.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:38.375 "hdgst": ${hdgst:-false}, 00:14:38.375 "ddgst": ${ddgst:-false} 00:14:38.375 }, 00:14:38.375 "method": "bdev_nvme_attach_controller" 00:14:38.375 } 00:14:38.375 EOF 00:14:38.375 )") 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:38.375 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:38.375 "params": { 00:14:38.375 "name": "Nvme1", 00:14:38.375 "trtype": "tcp", 00:14:38.375 "traddr": "10.0.0.2", 00:14:38.375 "adrfam": "ipv4", 00:14:38.375 "trsvcid": "4420", 00:14:38.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.375 "hdgst": false, 00:14:38.375 "ddgst": false 00:14:38.375 }, 00:14:38.375 "method": "bdev_nvme_attach_controller" 00:14:38.375 }' 00:14:38.633 [2024-07-22 18:21:50.394257] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:14:38.633 [2024-07-22 18:21:50.394457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71200 ] 00:14:38.633 [2024-07-22 18:21:50.579610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.891 [2024-07-22 18:21:50.847403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.149 [2024-07-22 18:21:51.062600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:39.407 Running I/O for 10 seconds... 00:14:49.381 00:14:49.381 Latency(us) 00:14:49.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.381 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:49.381 Verification LBA range: start 0x0 length 0x1000 00:14:49.381 Nvme1n1 : 10.02 4367.37 34.12 0.00 0.00 29223.37 4081.11 38844.97 00:14:49.381 =================================================================================================================== 00:14:49.381 Total : 4367.37 34.12 0.00 0.00 29223.37 4081.11 38844.97 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=71334 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:50.755 { 00:14:50.755 "params": { 00:14:50.755 "name": "Nvme$subsystem", 00:14:50.755 "trtype": "$TEST_TRANSPORT", 00:14:50.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:50.755 "adrfam": "ipv4", 00:14:50.755 "trsvcid": "$NVMF_PORT", 00:14:50.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:50.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:50.755 "hdgst": ${hdgst:-false}, 00:14:50.755 "ddgst": ${ddgst:-false} 00:14:50.755 }, 00:14:50.755 "method": "bdev_nvme_attach_controller" 00:14:50.755 } 00:14:50.755 
EOF 00:14:50.755 )") 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:50.755 [2024-07-22 18:22:02.400637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.755 [2024-07-22 18:22:02.400754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:50.755 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:50.755 "params": { 00:14:50.755 "name": "Nvme1", 00:14:50.755 "trtype": "tcp", 00:14:50.755 "traddr": "10.0.0.2", 00:14:50.755 "adrfam": "ipv4", 00:14:50.755 "trsvcid": "4420", 00:14:50.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:50.755 "hdgst": false, 00:14:50.755 "ddgst": false 00:14:50.755 }, 00:14:50.755 "method": "bdev_nvme_attach_controller" 00:14:50.755 }' 00:14:50.755 [2024-07-22 18:22:02.412499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.755 [2024-07-22 18:22:02.412544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.755 [2024-07-22 18:22:02.424462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.755 [2024-07-22 18:22:02.424513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.755 [2024-07-22 18:22:02.436477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.755 [2024-07-22 18:22:02.436521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.755 [2024-07-22 18:22:02.448701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.448819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.460676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.460796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.472709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.472832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.484689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.484808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.495520] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:14:50.756 [2024-07-22 18:22:02.495680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71334 ] 00:14:50.756 [2024-07-22 18:22:02.496542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.496613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.508560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.508624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.520634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.520740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.532708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.532813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.544691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.544794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.552592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.552664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.564703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.564809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.576557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.576638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.588598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.588702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.600794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.600923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.612684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.612788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.624685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.624773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.636696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.636793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.648717] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.648808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.660741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.660876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.665404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.756 [2024-07-22 18:22:02.672652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.672719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.684657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.684738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.696789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.696896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.708719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.708823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.720785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.720898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.732783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.732890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.744719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.744815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.756721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.756808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.756 [2024-07-22 18:22:02.768693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.756 [2024-07-22 18:22:02.768747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.014 [2024-07-22 18:22:02.780728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.014 [2024-07-22 18:22:02.780827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.014 [2024-07-22 18:22:02.792879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.014 [2024-07-22 18:22:02.792989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.014 [2024-07-22 18:22:02.804829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.014 [2024-07-22 18:22:02.804932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.014 [2024-07-22 18:22:02.816832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:51.014 [2024-07-22 18:22:02.816946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.014 [2024-07-22 18:22:02.828736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.014 [2024-07-22 18:22:02.828808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.014 [2024-07-22 18:22:02.840675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.014 [2024-07-22 18:22:02.840729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.014 [2024-07-22 18:22:02.852864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.852977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.864891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.864999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.876829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.876946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.888863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.888966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.900752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.900826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.912871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.912971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.917055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.015 [2024-07-22 18:22:02.924901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.925018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.936974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.937090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.948923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.949042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.960903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.961007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.972747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.972812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.984930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:51.015 [2024-07-22 18:22:02.985044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:02.996909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:02.997041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:03.008981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:03.009097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.015 [2024-07-22 18:22:03.020963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.015 [2024-07-22 18:22:03.021083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.032844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.032949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.044830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.044890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.056810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.056870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.068769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.068828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.080857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.080914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.092788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.092850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.104924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.105017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.116962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.117069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.128800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.128853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.140918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.140980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.152816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.152868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.164808] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.164865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.170940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:51.273 [2024-07-22 18:22:03.176883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.176940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.188951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.189057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.200884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.200942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.212843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.212899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.224867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.224908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.236861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.236914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.248853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.248895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.260837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.260882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.272877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.272917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.273 [2024-07-22 18:22:03.284845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.273 [2024-07-22 18:22:03.284895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.296913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.296964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.308963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.309007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.321027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.321122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.333187] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.333269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.345117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.345170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.357097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.357155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.369157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.369228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 Running I/O for 5 seconds... 00:14:51.545 [2024-07-22 18:22:03.392910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.393038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.410044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.410106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.429236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.429288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.445116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.445173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.462091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.462145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.479330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.479377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.496886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.496973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.514792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.514889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.532104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.532188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.545 [2024-07-22 18:22:03.550071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.545 [2024-07-22 18:22:03.550119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.564555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 
[2024-07-22 18:22:03.564607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.584089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.584146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.599738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.599823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.616688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.616745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.635109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.635169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.650429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.650474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.667720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.667775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.684390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.684446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.700955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.701015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.717229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.717286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.733332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.733394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.748959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.749028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.768349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.768426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.783046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.783100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.803073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.803144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.803 [2024-07-22 18:22:03.819275] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.803 [2024-07-22 18:22:03.819325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:03.839572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:03.839630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:03.855963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:03.856056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:03.874423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:03.874486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:03.889672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:03.889716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:03.905188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:03.905251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:03.920992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:03.921048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:03.939069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:03.939246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:03.963815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:03.963890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:03.981821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:03.981885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:04.000562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:04.000623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:04.019098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:04.019162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:04.037785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:04.037832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:04.056642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:04.056696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.061 [2024-07-22 18:22:04.075672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.061 [2024-07-22 18:22:04.075734] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.093904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.093956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.112927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.112978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.130548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.130600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.148900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.148960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.167070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.167124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.184276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.184327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.202295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.202362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.221178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.221254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.240115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.240183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.258403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.258450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.276526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.276589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.294588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.294651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.311778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.311833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.319 [2024-07-22 18:22:04.329937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.319 [2024-07-22 18:22:04.329991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.348426] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.348484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.366006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.366067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.383007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.383053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.400020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.400060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.416147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.416193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.433002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.433043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.445593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.445654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.463833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.463874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.477481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.477526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.494272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.494327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.507810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.507857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.525710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.525750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.543235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.577 [2024-07-22 18:22:04.543295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.577 [2024-07-22 18:22:04.559429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.578 [2024-07-22 18:22:04.559472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.578 [2024-07-22 18:22:04.571207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.578 [2024-07-22 18:22:04.571279] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.578 [2024-07-22 18:22:04.588949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.578 [2024-07-22 18:22:04.589006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.603758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.603824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.621231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.621273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.637317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.637364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.649774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.649815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.662961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.663005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.676776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.676832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.694677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.694720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.711922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.711964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.725106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.725159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.743566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.743608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.760872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.760927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.776167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.776223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.788858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.788912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.807258] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.807300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.824084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.824129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:52.835 [2024-07-22 18:22:04.836405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:52.835 [2024-07-22 18:22:04.836446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:04.855339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:04.855381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:04.868850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:04.868896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:04.886791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:04.886837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:04.900788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:04.900830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:04.915855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:04.915898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:04.935551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:04.935595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:04.949448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:04.949489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:04.966617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:04.966661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:04.982894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:04.982936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:04.995217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:04.995257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:05.012716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:05.012758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:05.028957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:05.028999] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:05.041632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:05.041672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:05.059480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:05.059534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:05.073644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:05.073684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:05.091221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:05.091276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.093 [2024-07-22 18:22:05.107152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.093 [2024-07-22 18:22:05.107225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.120344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.120385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.139975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.140018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.156684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.156725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.169385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.169451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.188302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.188343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.205829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.205870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.218338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.218391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.237740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.237781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.251435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.251485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.268272] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.268312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.285007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.285063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.298533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.298576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.316907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.316954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.330060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.330109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.348229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.348270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.352 [2024-07-22 18:22:05.361828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.352 [2024-07-22 18:22:05.361870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.610 [2024-07-22 18:22:05.380165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.610 [2024-07-22 18:22:05.380225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.610 [2024-07-22 18:22:05.397310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.610 [2024-07-22 18:22:05.397359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.610 [2024-07-22 18:22:05.413723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.610 [2024-07-22 18:22:05.413774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.610 [2024-07-22 18:22:05.433919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.610 [2024-07-22 18:22:05.433965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.610 [2024-07-22 18:22:05.452474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.610 [2024-07-22 18:22:05.452548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.610 [2024-07-22 18:22:05.471201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.610 [2024-07-22 18:22:05.471268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.610 [2024-07-22 18:22:05.490239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.610 [2024-07-22 18:22:05.490312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.611 [2024-07-22 18:22:05.514643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.611 [2024-07-22 18:22:05.514735] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.611 [2024-07-22 18:22:05.531496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.611 [2024-07-22 18:22:05.531542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.611 [2024-07-22 18:22:05.546142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.611 [2024-07-22 18:22:05.546188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.611 [2024-07-22 18:22:05.564081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.611 [2024-07-22 18:22:05.564127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.611 [2024-07-22 18:22:05.577047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.611 [2024-07-22 18:22:05.577092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.611 [2024-07-22 18:22:05.596674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.611 [2024-07-22 18:22:05.596720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.611 [2024-07-22 18:22:05.613436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.611 [2024-07-22 18:22:05.613479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.611 [2024-07-22 18:22:05.626179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.611 [2024-07-22 18:22:05.626234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.644813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.644859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.661986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.662031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.674940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.674984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.694368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.694410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.710822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.710866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.726666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.726710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.742531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.742573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.755124] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.755168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.773046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.773124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.790737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.790782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.803733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.803777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.822006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.822051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.835745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.835791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.853803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.869 [2024-07-22 18:22:05.853849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.869 [2024-07-22 18:22:05.867888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.870 [2024-07-22 18:22:05.867933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:53.870 [2024-07-22 18:22:05.885552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:53.870 [2024-07-22 18:22:05.885596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:05.899782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:05.899826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:05.916995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:05.917040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:05.930519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:05.930562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:05.948568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:05.948614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:05.964603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:05.964648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:05.981750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:05.981805] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:05.997752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:05.997795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:06.013532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:06.013591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:06.026456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:06.026500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:06.045267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:06.045315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:06.059600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:06.059647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:06.077354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:06.077414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:06.090479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:06.090524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:06.109375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:06.109461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:06.123462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:06.123506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.129 [2024-07-22 18:22:06.138650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.129 [2024-07-22 18:22:06.138699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.156842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.156896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.169788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.169832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.188030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.188075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.200816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.200860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.219656] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.219700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.236136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.236196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.248745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.248805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.267301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.267356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.281352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.281397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.296788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.296833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.314882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.314932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.331038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.331083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.347501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.347546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.360382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.360426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.379862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.379908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.388 [2024-07-22 18:22:06.394580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.388 [2024-07-22 18:22:06.394626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.412453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.412499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.426507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.426551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.444674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.444719] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.459159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.459217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.476688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.476732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.492724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.492768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.505861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.505907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.525559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.525606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.542803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.542849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.559412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.559458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.571992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.572036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.590900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.590960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.608524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.608568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.625510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.625555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.638401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.638446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.647 [2024-07-22 18:22:06.657296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.647 [2024-07-22 18:22:06.657345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.674477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.674524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.691434] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.691480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.704019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.704063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.721607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.721672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.737836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.737881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.755973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.756018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.771466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.771509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.784487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.784530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.803041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.803101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.819172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.819250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.834777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.834835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.847500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.847544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.865868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.865912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.879692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.879736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.897081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.897142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.906 [2024-07-22 18:22:06.910817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:54.906 [2024-07-22 18:22:06.910876] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:06.925560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:06.925605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:06.942788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:06.942845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:06.956240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:06.956282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:06.974714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:06.974757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:06.991370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:06.991413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:07.005070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:07.005119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:07.023989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:07.024035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:07.037522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:07.037567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:07.054377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:07.054421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:07.067534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:07.067578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:07.086026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:07.086076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:07.102574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:07.102621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:07.118117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:07.118163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:07.133968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:07.134027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:07.151349] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:07.151396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.166 [2024-07-22 18:22:07.164499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.166 [2024-07-22 18:22:07.164545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.182830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.182879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.195618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.195663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.214970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.215014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.229301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.229344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.247053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.247101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.264102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.264147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.276614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.276660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.295287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.295333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.312708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.312753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.328750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.328794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.341279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.341322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.359533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.425 [2024-07-22 18:22:07.359578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.425 [2024-07-22 18:22:07.373807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.426 [2024-07-22 18:22:07.373851] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.426 [2024-07-22 18:22:07.390714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.426 [2024-07-22 18:22:07.390773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.426 [2024-07-22 18:22:07.406850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.426 [2024-07-22 18:22:07.406910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.426 [2024-07-22 18:22:07.424601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.426 [2024-07-22 18:22:07.424646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.426 [2024-07-22 18:22:07.440347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.426 [2024-07-22 18:22:07.440393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.456529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.456576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.472575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.472623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.485314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.485360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.504746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.504795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.519449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.519496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.536701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.536748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.552792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.552839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.564457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.564501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.580196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.580252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.597643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.597687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.614846] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.614890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.631533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.631579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.647634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.647678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.664596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.664658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.677888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.677932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.684 [2024-07-22 18:22:07.696933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.684 [2024-07-22 18:22:07.696978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.942 [2024-07-22 18:22:07.711340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.942 [2024-07-22 18:22:07.711384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.942 [2024-07-22 18:22:07.728838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.942 [2024-07-22 18:22:07.728883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.746392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.746438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.759449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.759505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.779007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.779063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.793262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.793312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.811486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.811530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.829263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.829312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.846182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.846243] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.858989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.859036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.877546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.877593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.891931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.891977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.909492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.909542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.925666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.925712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.938382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.938425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:55.943 [2024-07-22 18:22:07.956673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:55.943 [2024-07-22 18:22:07.956717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:07.972930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:07.972974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:07.989411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:07.989456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.001949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.001997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.020745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.020792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.034520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.034565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.051851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.051897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.065225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.065268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.083589] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.083636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.097272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.097318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.114569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.114613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.128550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.128594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.146925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.146974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.161939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.162011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.180120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.180166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.196558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.196603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.201 [2024-07-22 18:22:08.208935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.201 [2024-07-22 18:22:08.208979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.460 [2024-07-22 18:22:08.227509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.460 [2024-07-22 18:22:08.227554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.460 [2024-07-22 18:22:08.244930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.460 [2024-07-22 18:22:08.244975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.460 [2024-07-22 18:22:08.257943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.460 [2024-07-22 18:22:08.257988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.460 [2024-07-22 18:22:08.276046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.460 [2024-07-22 18:22:08.276089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.460 [2024-07-22 18:22:08.290365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.460 [2024-07-22 18:22:08.290408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.460 [2024-07-22 18:22:08.307746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.460 [2024-07-22 18:22:08.307822] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460 [2024-07-22 18:22:08.321712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.321757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460 [2024-07-22 18:22:08.339835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.339888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460 [2024-07-22 18:22:08.353703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.353746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460 [2024-07-22 18:22:08.371681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.371726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460 [2024-07-22 18:22:08.384509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.384553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460
00:14:56.460 Latency(us)
00:14:56.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:56.460 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:14:56.460 Nvme1n1 : 5.01 8352.62 65.25 0.00 0.00 15301.47 4885.41 34555.35
00:14:56.460 ===================================================================================================================
00:14:56.460 Total : 8352.62 65.25 0.00 0.00 15301.47 4885.41 34555.35
00:14:56.460 [2024-07-22 18:22:08.395401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.395443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460 [2024-07-22 18:22:08.407523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.407565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460 [2024-07-22 18:22:08.419542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.419584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460 [2024-07-22 18:22:08.431614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.431673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460 [2024-07-22 18:22:08.443598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.443650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460 [2024-07-22 18:22:08.455595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.455636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.460 [2024-07-22 18:22:08.467533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:56.460 [2024-07-22 18:22:08.467571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:56.719 [2024-07-22 18:22:08.479568]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.719 [2024-07-22 18:22:08.479606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.719 [2024-07-22 18:22:08.491568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.719 [2024-07-22 18:22:08.491607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.719 [2024-07-22 18:22:08.503579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.719 [2024-07-22 18:22:08.503624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.719 [2024-07-22 18:22:08.515637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.719 [2024-07-22 18:22:08.515693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.719 [2024-07-22 18:22:08.527597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.719 [2024-07-22 18:22:08.527641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.719 [2024-07-22 18:22:08.539560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.719 [2024-07-22 18:22:08.539599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.551611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.551650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.563578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.563615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.575591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.575629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.587600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.587638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.599611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.599651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.611624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.611662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.623623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.623661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.635611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.635651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.647658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.647705] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.659666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.659719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.671656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.671694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.683652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.683690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.695629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.695667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.707693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.707737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.719738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.719826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.720 [2024-07-22 18:22:08.731721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.720 [2024-07-22 18:22:08.731774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.743699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.743738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.755656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.755694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.767681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.767719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.779677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.779714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.791660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.791699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.803691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.803730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.815697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.815736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.827691] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.827731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.839778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.839848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.851753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.851805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.863730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.863800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.875719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.875757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.887731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.887784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.899731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.899801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.978 [2024-07-22 18:22:08.911738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.978 [2024-07-22 18:22:08.911776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.979 [2024-07-22 18:22:08.923745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.979 [2024-07-22 18:22:08.923787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.979 [2024-07-22 18:22:08.935751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.979 [2024-07-22 18:22:08.935791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.979 [2024-07-22 18:22:08.947734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.979 [2024-07-22 18:22:08.947772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.979 [2024-07-22 18:22:08.959753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.979 [2024-07-22 18:22:08.959791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.979 [2024-07-22 18:22:08.971753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.979 [2024-07-22 18:22:08.971792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:56.979 [2024-07-22 18:22:08.983739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:56.979 [2024-07-22 18:22:08.983785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:08.995766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:08.995804] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.007773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.007810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.019755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.019784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.031797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.031835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.043767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.043805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.055787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.055825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.067789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.067827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.079782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.079819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.091803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.091841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.103870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.103922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.115801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.115837] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.127833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.127870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.139848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.139886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.159868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.159912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.171862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.171919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.183871] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.183912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.195873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.195913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.207878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.207919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.219913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.219969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.231908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.231950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.237 [2024-07-22 18:22:09.243856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.237 [2024-07-22 18:22:09.243894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.255890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.255929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.267879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.267917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.279866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.279903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.291886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.291923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.303942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.303991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.315878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.315915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.327927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.327965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.339880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.339917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.351902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.351940] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.363929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.363967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.375892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.375944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.387953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.387995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.400004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.400059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.412023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.412066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.423959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.423999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.435937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.435976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.448029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.448083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.459969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.460006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.471996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.472034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.483998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.484036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.495980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.496034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.496 [2024-07-22 18:22:09.507966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.496 [2024-07-22 18:22:09.508019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.755 [2024-07-22 18:22:09.519985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:57.755 [2024-07-22 18:22:09.520039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:57.755 [2024-07-22 18:22:09.531976] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:57.755 [2024-07-22 18:22:09.532029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:57.755 [2024-07-22 18:22:09.544019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:57.755 [2024-07-22 18:22:09.544072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:57.755 [2024-07-22 18:22:09.556015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:57.755 [2024-07-22 18:22:09.556053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:57.755 [2024-07-22 18:22:09.567999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:57.755 [2024-07-22 18:22:09.568041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:57.755 [2024-07-22 18:22:09.580025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:57.755 [2024-07-22 18:22:09.580062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:57.755 [2024-07-22 18:22:09.592052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:57.755 [2024-07-22 18:22:09.592090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:57.755 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (71334) - No such process
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 71334
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:57.755 delay0
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:57.755 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:14:58.046 [2024-07-22 18:22:09.861302] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:15:04.611 Initializing NVMe
Controllers 00:15:04.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:04.611 Initialization complete. Launching workers. 00:15:04.611 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 71 00:15:04.611 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 358, failed to submit 33 00:15:04.611 success 234, unsuccess 124, failed 0 00:15:04.611 18:22:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:04.611 18:22:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:04.611 18:22:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:04.611 18:22:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:04.611 rmmod nvme_tcp 00:15:04.611 rmmod nvme_fabrics 00:15:04.611 rmmod nvme_keyring 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 71167 ']' 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 71167 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 71167 ']' 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 71167 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71167 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:04.611 killing process with pid 71167 00:15:04.611 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:04.612 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71167' 00:15:04.612 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 71167 00:15:04.612 18:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 71167 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:05.546 00:15:05.546 real 0m28.900s 00:15:05.546 user 0m47.847s 00:15:05.546 sys 0m7.006s 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.546 ************************************ 00:15:05.546 END TEST nvmf_zcopy 00:15:05.546 ************************************ 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:05.546 ************************************ 00:15:05.546 START TEST nvmf_nmic 00:15:05.546 ************************************ 00:15:05.546 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:05.804 * Looking for test storage... 
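Before the nmic output continues: the zcopy run that just finished above ends with a deliberate failure loop (re-adding NSID 1 while the subsystem is paused), a swap to a delay bdev, and an abort stress run against it. A minimal sketch of those closing steps, reconstructed only from commands visible in the trace (the rpc.py and abort example paths are taken from the trace; treat the standalone script framing as an assumption, not the zcopy.sh source):

    #!/usr/bin/env bash
    # Sketch only: condenses the tail of test/nvmf/target/zcopy.sh as traced above.
    # Assumes a running nvmf target with subsystem nqn.2016-06.io.spdk:cnode1
    # listening on 10.0.0.2:4420, as in the run above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Drop the original namespace, then expose a delay bdev (1s latency on all
    # ops) in its place as NSID 1 so aborts have queued I/O to act on.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # Drive queued I/O against the slow namespace and abort it from the example app.
    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort summary in the trace (358 aborts submitted, 234 successful) is the expected outcome of this sequence: with a 1-second delay on every operation, most outstanding commands are still queued when the abort arrives.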
00:15:05.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.804 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.805 18:22:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:05.805 Cannot find device "nvmf_tgt_br" 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:05.805 Cannot find device "nvmf_tgt_br2" 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:15:05.805 Cannot find device "nvmf_tgt_br" 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:05.805 Cannot find device "nvmf_tgt_br2" 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:05.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:05.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:05.805 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:06.064 18:22:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:06.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:06.064 00:15:06.064 --- 10.0.0.2 ping statistics --- 00:15:06.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.064 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:06.064 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:06.064 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:15:06.064 00:15:06.064 --- 10.0.0.3 ping statistics --- 00:15:06.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.064 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:06.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:06.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:06.064 00:15:06.064 --- 10.0.0.1 ping statistics --- 00:15:06.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.064 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=71684 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 71684 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 71684 ']' 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.064 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:06.322 [2024-07-22 18:22:18.098177] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:15:06.322 [2024-07-22 18:22:18.098359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.322 [2024-07-22 18:22:18.271860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.581 [2024-07-22 18:22:18.589328] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.581 [2024-07-22 18:22:18.589399] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.581 [2024-07-22 18:22:18.589420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.581 [2024-07-22 18:22:18.589438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.581 [2024-07-22 18:22:18.589458] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.581 [2024-07-22 18:22:18.589707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.581 [2024-07-22 18:22:18.589964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.581 [2024-07-22 18:22:18.590774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.581 [2024-07-22 18:22:18.590820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.853 [2024-07-22 18:22:18.803158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:07.128 [2024-07-22 18:22:19.111483] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.128 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:07.387 Malloc0 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:07.387 18:22:19 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:07.387 [2024-07-22 18:22:19.260930] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.387 test case1: single bdev can't be used in multiple subsystems 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.387 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:07.387 [2024-07-22 18:22:19.288712] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:07.387 [2024-07-22 18:22:19.288778] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:07.387 [2024-07-22 18:22:19.288804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:07.387 request: 00:15:07.387 { 00:15:07.387 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:07.387 "namespace": { 00:15:07.387 "bdev_name": "Malloc0", 00:15:07.388 "no_auto_visible": false 00:15:07.388 }, 00:15:07.388 "method": "nvmf_subsystem_add_ns", 00:15:07.388 "req_id": 1 00:15:07.388 } 00:15:07.388 Got JSON-RPC error response 00:15:07.388 response: 00:15:07.388 { 00:15:07.388 "code": -32602, 00:15:07.388 "message": "Invalid parameters" 00:15:07.388 } 00:15:07.388 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:07.388 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:07.388 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:07.388 Adding namespace failed - expected result. 00:15:07.388 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:07.388 test case2: host connect to nvmf target in multiple paths 00:15:07.388 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:07.388 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:07.388 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.388 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:07.388 [2024-07-22 18:22:19.300902] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:07.388 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.388 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:07.646 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:07.646 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:07.646 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:07.646 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.646 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:07.646 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:09.560 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:09.560 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:09.560 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:09.850 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:09.850 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.850 18:22:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:09.850 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:09.850 [global] 00:15:09.850 thread=1 00:15:09.850 invalidate=1 00:15:09.850 rw=write 00:15:09.850 time_based=1 00:15:09.850 runtime=1 00:15:09.850 ioengine=libaio 00:15:09.850 direct=1 00:15:09.850 bs=4096 00:15:09.850 iodepth=1 00:15:09.850 norandommap=0 00:15:09.850 numjobs=1 00:15:09.850 00:15:09.850 verify_dump=1 00:15:09.850 verify_backlog=512 00:15:09.850 verify_state_save=0 00:15:09.850 do_verify=1 00:15:09.850 verify=crc32c-intel 00:15:09.850 [job0] 00:15:09.850 filename=/dev/nvme0n1 00:15:09.850 Could not set queue depth (nvme0n1) 00:15:09.850 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:09.850 fio-3.35 00:15:09.850 Starting 1 thread 00:15:11.224 00:15:11.224 job0: (groupid=0, jobs=1): err= 0: pid=71780: Mon Jul 22 18:22:22 2024 00:15:11.224 read: IOPS=2253, BW=9015KiB/s (9231kB/s)(9024KiB/1001msec) 00:15:11.224 slat (nsec): min=13100, max=55665, avg=16713.32, stdev=3636.37 00:15:11.224 clat (usec): min=185, max=849, avg=228.85, stdev=21.48 00:15:11.224 lat (usec): min=205, max=880, avg=245.56, stdev=22.15 00:15:11.224 clat percentiles (usec): 00:15:11.224 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:15:11.224 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:15:11.224 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 260], 00:15:11.224 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 326], 99.95th=[ 379], 00:15:11.224 | 99.99th=[ 848] 00:15:11.224 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:11.224 slat (usec): min=18, max=172, avg=24.11, stdev= 6.92 00:15:11.224 clat (usec): min=116, max=325, avg=146.05, stdev=14.71 00:15:11.224 lat (usec): min=142, max=498, avg=170.16, stdev=18.02 00:15:11.224 clat percentiles (usec): 00:15:11.224 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 131], 20.00th=[ 135], 00:15:11.224 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 147], 00:15:11.224 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 165], 95.00th=[ 174], 00:15:11.224 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 233], 99.95th=[ 281], 00:15:11.224 | 99.99th=[ 326] 00:15:11.224 bw ( KiB/s): min=10864, max=10864, per=100.00%, avg=10864.00, stdev= 0.00, samples=1 00:15:11.224 iops : min= 2716, max= 2716, avg=2716.00, stdev= 0.00, samples=1 00:15:11.224 lat (usec) : 250=95.33%, 500=4.65%, 1000=0.02% 00:15:11.224 cpu : usr=2.30%, sys=7.70%, ctx=4823, majf=0, minf=2 00:15:11.224 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:11.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.224 issued rwts: total=2256,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.224 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:11.224 00:15:11.224 Run status group 0 (all jobs): 00:15:11.224 READ: bw=9015KiB/s (9231kB/s), 9015KiB/s-9015KiB/s (9231kB/s-9231kB/s), io=9024KiB (9241kB), run=1001-1001msec 00:15:11.224 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:15:11.224 00:15:11.224 Disk stats (read/write): 00:15:11.224 nvme0n1: ios=2098/2270, merge=0/0, ticks=499/346, in_queue=845, 
util=91.68% 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:11.224 18:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:11.224 rmmod nvme_tcp 00:15:11.224 rmmod nvme_fabrics 00:15:11.224 rmmod nvme_keyring 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 71684 ']' 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 71684 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 71684 ']' 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 71684 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71684 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71684' 00:15:11.224 killing process with pid 71684 00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 71684 
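Stripped of the xtrace noise, the nmic run being torn down here boiled down to two checks: a bdev can only be claimed by one subsystem, and one subsystem can be reached over multiple listeners. A sketch of the first check, using only RPCs that appear verbatim in the trace (the rpc.py path and the if/exit wrapper are assumptions; the script itself uses its rpc_cmd helper):

    # Sketch of nmic.sh "test case1" as traced above; not the script itself.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Second subsystem: adding the same bdev must fail, because Malloc0 is already
    # claimed (exclusive_write) by cnode1 -- the trace shows JSON-RPC error -32602.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "unexpected: second claim of Malloc0 succeeded" >&2
        exit 1
    fi

The second check is visible further up as the two nvme connect invocations against ports 4420 and 4421 of cnode1, followed by the single-job fio write whose output precedes this teardown.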
00:15:11.224 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 71684 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:13.123 00:15:13.123 real 0m7.132s 00:15:13.123 user 0m21.552s 00:15:13.123 sys 0m2.239s 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:13.123 ************************************ 00:15:13.123 END TEST nvmf_nmic 00:15:13.123 ************************************ 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:13.123 ************************************ 00:15:13.123 START TEST nvmf_fio_target 00:15:13.123 ************************************ 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:13.123 * Looking for test storage... 
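The fio_target run starting here repeats the same nvmf_veth_init bring-up the nmic test used: a network namespace for the target, veth pairs bridged back to the initiator side, and the 10.0.0.0/24 addresses that the later pings, listeners, and connects rely on. Condensed from the ip/iptables commands visible in the trace (a sketch of the traced steps, not the common.sh function itself):

    # Sketch of the nvmf_veth_init steps traced in nvmf/common.sh above.
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: one for the initiator, two for the target (moved into the netns).
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side ends together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic to the default port and forwarding across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The "Cannot find device" and "Cannot open network namespace" messages that precede this in each test are expected: the cleanup half of the helper runs first and simply finds nothing left over from a prior run.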
00:15:13.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.123 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:13.124 
18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:13.124 Cannot find device "nvmf_tgt_br" 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:13.124 Cannot find device "nvmf_tgt_br2" 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:13.124 Cannot find device "nvmf_tgt_br" 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:13.124 Cannot find device "nvmf_tgt_br2" 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:13.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:13.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:13.124 18:22:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:13.124 
18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:13.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:15:13.124 00:15:13.124 --- 10.0.0.2 ping statistics --- 00:15:13.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.124 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:13.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:13.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:15:13.124 00:15:13.124 --- 10.0.0.3 ping statistics --- 00:15:13.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.124 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:13.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:13.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:13.124 00:15:13.124 --- 10.0.0.1 ping statistics --- 00:15:13.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.124 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:13.124 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=71970 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 71970 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 71970 ']' 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.393 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.393 [2024-07-22 18:22:25.260620] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
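For readability, the network plumbing that nvmf/common.sh performs in the trace above condenses to the sketch below. This is a minimal reconstruction rather than the script itself: it assumes root privileges plus iproute2 and iptables on the host, and the namespace, interface, and address names simply mirror the trace.

# SPDK's nvmf_tgt runs inside its own network namespace; the initiator side (nvme-cli, fio) stays on the host.
ip netns add nvmf_tgt_ns_spdk
# Three veth pairs: one initiator-side link and two target-side "ports".
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target ends into the namespace and address everything out of 10.0.0.0/24.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring every end up, tie the host-side peers together with a bridge, and open TCP/4420.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Sanity checks, as in the trace: the host reaches both target addresses and the namespace reaches the host.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target-side configuration that target/fio.sh then drives through rpc.py (visible in the trace below) reduces, equally as a hedged sketch, to roughly the following; sizes, the subsystem NQN, and addresses are taken from the trace, and the --hostnqn/--hostid flags that the trace passes to nvme connect are omitted here:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# nvmf_tgt itself is launched inside the namespace ("ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF").
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done           # Malloc0 .. Malloc6
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do                          # four namespaces -> nvme0n1..nvme0n4 on the host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420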
00:15:13.393 [2024-07-22 18:22:25.260777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.665 [2024-07-22 18:22:25.430498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.665 [2024-07-22 18:22:25.675685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.665 [2024-07-22 18:22:25.675768] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.665 [2024-07-22 18:22:25.675786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.665 [2024-07-22 18:22:25.675801] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.665 [2024-07-22 18:22:25.675817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.665 [2024-07-22 18:22:25.676394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.665 [2024-07-22 18:22:25.676474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.665 [2024-07-22 18:22:25.676551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:13.665 [2024-07-22 18:22:25.676521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.923 [2024-07-22 18:22:25.888244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:14.182 18:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.182 18:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:14.182 18:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:14.182 18:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.182 18:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.440 18:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.440 18:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:14.698 [2024-07-22 18:22:26.467671] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.698 18:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:14.956 18:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:14.956 18:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:15.215 18:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:15.215 18:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:15.473 18:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:15.473 18:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:16.043 18:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:16.043 18:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:16.301 18:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:16.558 18:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:16.558 18:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:16.817 18:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:16.817 18:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:17.075 18:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:17.075 18:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:17.333 18:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:17.591 18:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:17.591 18:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:17.849 18:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:17.850 18:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:18.107 18:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.365 [2024-07-22 18:22:30.312080] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.365 18:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:18.623 18:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:18.880 18:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:19.191 18:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:19.191 18:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:19.191 18:22:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.191 18:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:19.191 18:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:19.191 18:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:21.086 18:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:21.086 18:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:21.086 18:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:21.086 18:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:21.086 18:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:21.086 18:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:21.086 18:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:21.086 [global] 00:15:21.086 thread=1 00:15:21.086 invalidate=1 00:15:21.086 rw=write 00:15:21.086 time_based=1 00:15:21.086 runtime=1 00:15:21.086 ioengine=libaio 00:15:21.086 direct=1 00:15:21.086 bs=4096 00:15:21.086 iodepth=1 00:15:21.086 norandommap=0 00:15:21.086 numjobs=1 00:15:21.086 00:15:21.086 verify_dump=1 00:15:21.086 verify_backlog=512 00:15:21.086 verify_state_save=0 00:15:21.086 do_verify=1 00:15:21.086 verify=crc32c-intel 00:15:21.086 [job0] 00:15:21.086 filename=/dev/nvme0n1 00:15:21.086 [job1] 00:15:21.086 filename=/dev/nvme0n2 00:15:21.086 [job2] 00:15:21.086 filename=/dev/nvme0n3 00:15:21.086 [job3] 00:15:21.086 filename=/dev/nvme0n4 00:15:21.086 Could not set queue depth (nvme0n1) 00:15:21.086 Could not set queue depth (nvme0n2) 00:15:21.086 Could not set queue depth (nvme0n3) 00:15:21.086 Could not set queue depth (nvme0n4) 00:15:21.343 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.343 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.343 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.343 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.343 fio-3.35 00:15:21.343 Starting 4 threads 00:15:22.714 00:15:22.714 job0: (groupid=0, jobs=1): err= 0: pid=72160: Mon Jul 22 18:22:34 2024 00:15:22.714 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:22.714 slat (nsec): min=12714, max=51142, avg=19115.75, stdev=4566.45 00:15:22.714 clat (usec): min=181, max=2540, avg=231.51, stdev=61.98 00:15:22.714 lat (usec): min=198, max=2570, avg=250.63, stdev=62.55 00:15:22.714 clat percentiles (usec): 00:15:22.714 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 212], 00:15:22.714 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:15:22.714 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:15:22.714 | 99.00th=[ 326], 99.50th=[ 375], 99.90th=[ 652], 99.95th=[ 1221], 00:15:22.714 | 99.99th=[ 
2540] 00:15:22.714 write: IOPS=2301, BW=9207KiB/s (9428kB/s)(9216KiB/1001msec); 0 zone resets 00:15:22.714 slat (usec): min=16, max=136, avg=29.25, stdev= 8.57 00:15:22.714 clat (usec): min=126, max=611, avg=177.09, stdev=29.28 00:15:22.714 lat (usec): min=145, max=649, avg=206.34, stdev=32.42 00:15:22.714 clat percentiles (usec): 00:15:22.714 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:15:22.714 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:15:22.714 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 221], 00:15:22.714 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 510], 99.95th=[ 523], 00:15:22.714 | 99.99th=[ 611] 00:15:22.714 bw ( KiB/s): min= 8632, max= 8632, per=23.67%, avg=8632.00, stdev= 0.00, samples=1 00:15:22.714 iops : min= 2158, max= 2158, avg=2158.00, stdev= 0.00, samples=1 00:15:22.714 lat (usec) : 250=92.46%, 500=7.35%, 750=0.14% 00:15:22.714 lat (msec) : 2=0.02%, 4=0.02% 00:15:22.714 cpu : usr=2.00%, sys=8.80%, ctx=4353, majf=0, minf=13 00:15:22.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.714 issued rwts: total=2048,2304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.714 job1: (groupid=0, jobs=1): err= 0: pid=72161: Mon Jul 22 18:22:34 2024 00:15:22.714 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:22.714 slat (usec): min=12, max=126, avg=17.73, stdev= 6.03 00:15:22.714 clat (usec): min=150, max=452, avg=222.15, stdev=23.28 00:15:22.714 lat (usec): min=201, max=467, avg=239.88, stdev=25.21 00:15:22.714 clat percentiles (usec): 00:15:22.714 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 204], 00:15:22.714 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:15:22.714 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 265], 00:15:22.714 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 412], 99.95th=[ 437], 00:15:22.714 | 99.99th=[ 453] 00:15:22.714 write: IOPS=2523, BW=9.86MiB/s (10.3MB/s)(9.87MiB/1001msec); 0 zone resets 00:15:22.714 slat (nsec): min=17105, max=98468, avg=27638.60, stdev=8347.00 00:15:22.714 clat (usec): min=127, max=367, avg=169.60, stdev=24.73 00:15:22.714 lat (usec): min=146, max=465, avg=197.24, stdev=30.60 00:15:22.714 clat percentiles (usec): 00:15:22.714 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:15:22.714 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 172], 00:15:22.714 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 208], 95.00th=[ 217], 00:15:22.714 | 99.00th=[ 233], 99.50th=[ 237], 99.90th=[ 251], 99.95th=[ 277], 00:15:22.714 | 99.99th=[ 367] 00:15:22.714 bw ( KiB/s): min=10040, max=10040, per=27.53%, avg=10040.00, stdev= 0.00, samples=1 00:15:22.714 iops : min= 2510, max= 2510, avg=2510.00, stdev= 0.00, samples=1 00:15:22.714 lat (usec) : 250=94.71%, 500=5.29% 00:15:22.714 cpu : usr=2.50%, sys=8.00%, ctx=4585, majf=0, minf=6 00:15:22.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.714 issued rwts: total=2048,2526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.714 
job2: (groupid=0, jobs=1): err= 0: pid=72162: Mon Jul 22 18:22:34 2024 00:15:22.714 read: IOPS=1859, BW=7437KiB/s (7615kB/s)(7444KiB/1001msec) 00:15:22.714 slat (nsec): min=15381, max=60775, avg=26227.13, stdev=7222.18 00:15:22.714 clat (usec): min=198, max=474, avg=244.65, stdev=22.56 00:15:22.714 lat (usec): min=227, max=498, avg=270.87, stdev=22.55 00:15:22.714 clat percentiles (usec): 00:15:22.714 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:15:22.714 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:15:22.714 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:15:22.714 | 99.00th=[ 306], 99.50th=[ 322], 99.90th=[ 424], 99.95th=[ 474], 00:15:22.714 | 99.99th=[ 474] 00:15:22.714 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:22.714 slat (usec): min=19, max=155, avg=38.04, stdev=10.48 00:15:22.714 clat (usec): min=141, max=2843, avg=198.13, stdev=68.39 00:15:22.714 lat (usec): min=177, max=2884, avg=236.17, stdev=69.23 00:15:22.714 clat percentiles (usec): 00:15:22.714 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 169], 00:15:22.714 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 198], 00:15:22.714 | 70.00th=[ 210], 80.00th=[ 225], 90.00th=[ 245], 95.00th=[ 260], 00:15:22.714 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 553], 99.95th=[ 668], 00:15:22.714 | 99.99th=[ 2835] 00:15:22.714 bw ( KiB/s): min= 8192, max= 8192, per=22.47%, avg=8192.00, stdev= 0.00, samples=1 00:15:22.714 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:22.714 lat (usec) : 250=78.41%, 500=21.51%, 750=0.05% 00:15:22.714 lat (msec) : 4=0.03% 00:15:22.714 cpu : usr=3.00%, sys=9.60%, ctx=3910, majf=0, minf=7 00:15:22.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.714 issued rwts: total=1861,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.714 job3: (groupid=0, jobs=1): err= 0: pid=72163: Mon Jul 22 18:22:34 2024 00:15:22.714 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:22.714 slat (nsec): min=12935, max=62058, avg=19353.53, stdev=4555.99 00:15:22.714 clat (usec): min=191, max=1393, avg=236.89, stdev=37.09 00:15:22.714 lat (usec): min=207, max=1422, avg=256.24, stdev=37.95 00:15:22.714 clat percentiles (usec): 00:15:22.714 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 219], 00:15:22.714 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:15:22.714 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 277], 00:15:22.714 | 99.00th=[ 314], 99.50th=[ 355], 99.90th=[ 627], 99.95th=[ 693], 00:15:22.714 | 99.99th=[ 1401] 00:15:22.714 write: IOPS=2244, BW=8979KiB/s (9195kB/s)(8988KiB/1001msec); 0 zone resets 00:15:22.714 slat (usec): min=14, max=132, avg=26.68, stdev= 7.20 00:15:22.714 clat (usec): min=136, max=784, avg=180.29, stdev=24.64 00:15:22.714 lat (usec): min=157, max=917, avg=206.96, stdev=27.51 00:15:22.714 clat percentiles (usec): 00:15:22.714 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:15:22.714 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:15:22.714 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 219], 00:15:22.714 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 293], 99.95th=[ 302], 
00:15:22.714 | 99.99th=[ 783] 00:15:22.714 bw ( KiB/s): min= 8976, max= 8976, per=24.62%, avg=8976.00, stdev= 0.00, samples=1 00:15:22.714 iops : min= 2244, max= 2244, avg=2244.00, stdev= 0.00, samples=1 00:15:22.714 lat (usec) : 250=88.66%, 500=11.25%, 750=0.05%, 1000=0.02% 00:15:22.714 lat (msec) : 2=0.02% 00:15:22.714 cpu : usr=2.10%, sys=7.70%, ctx=4295, majf=0, minf=9 00:15:22.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.714 issued rwts: total=2048,2247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.714 00:15:22.714 Run status group 0 (all jobs): 00:15:22.714 READ: bw=31.2MiB/s (32.8MB/s), 7437KiB/s-8184KiB/s (7615kB/s-8380kB/s), io=31.3MiB (32.8MB), run=1001-1001msec 00:15:22.714 WRITE: bw=35.6MiB/s (37.3MB/s), 8184KiB/s-9.86MiB/s (8380kB/s-10.3MB/s), io=35.6MiB (37.4MB), run=1001-1001msec 00:15:22.714 00:15:22.715 Disk stats (read/write): 00:15:22.715 nvme0n1: ios=1734/2048, merge=0/0, ticks=429/395, in_queue=824, util=88.48% 00:15:22.715 nvme0n2: ios=1961/2048, merge=0/0, ticks=464/350, in_queue=814, util=87.75% 00:15:22.715 nvme0n3: ios=1536/1792, merge=0/0, ticks=391/379, in_queue=770, util=88.73% 00:15:22.715 nvme0n4: ios=1668/2048, merge=0/0, ticks=400/396, in_queue=796, util=89.64% 00:15:22.715 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:22.715 [global] 00:15:22.715 thread=1 00:15:22.715 invalidate=1 00:15:22.715 rw=randwrite 00:15:22.715 time_based=1 00:15:22.715 runtime=1 00:15:22.715 ioengine=libaio 00:15:22.715 direct=1 00:15:22.715 bs=4096 00:15:22.715 iodepth=1 00:15:22.715 norandommap=0 00:15:22.715 numjobs=1 00:15:22.715 00:15:22.715 verify_dump=1 00:15:22.715 verify_backlog=512 00:15:22.715 verify_state_save=0 00:15:22.715 do_verify=1 00:15:22.715 verify=crc32c-intel 00:15:22.715 [job0] 00:15:22.715 filename=/dev/nvme0n1 00:15:22.715 [job1] 00:15:22.715 filename=/dev/nvme0n2 00:15:22.715 [job2] 00:15:22.715 filename=/dev/nvme0n3 00:15:22.715 [job3] 00:15:22.715 filename=/dev/nvme0n4 00:15:22.715 Could not set queue depth (nvme0n1) 00:15:22.715 Could not set queue depth (nvme0n2) 00:15:22.715 Could not set queue depth (nvme0n3) 00:15:22.715 Could not set queue depth (nvme0n4) 00:15:22.715 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:22.715 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:22.715 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:22.715 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:22.715 fio-3.35 00:15:22.715 Starting 4 threads 00:15:24.087 00:15:24.087 job0: (groupid=0, jobs=1): err= 0: pid=72217: Mon Jul 22 18:22:35 2024 00:15:24.087 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:15:24.087 slat (nsec): min=13109, max=69966, avg=15791.44, stdev=3295.04 00:15:24.087 clat (usec): min=169, max=431, avg=199.27, stdev=14.72 00:15:24.087 lat (usec): min=184, max=447, avg=215.06, stdev=15.63 00:15:24.087 clat percentiles (usec): 00:15:24.088 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 
186], 20.00th=[ 188], 00:15:24.088 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:15:24.088 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 217], 95.00th=[ 223], 00:15:24.088 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 334], 99.95th=[ 347], 00:15:24.088 | 99.99th=[ 433] 00:15:24.088 write: IOPS=2611, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:15:24.088 slat (usec): min=17, max=105, avg=22.13, stdev= 3.71 00:15:24.088 clat (usec): min=119, max=1686, avg=145.90, stdev=33.88 00:15:24.088 lat (usec): min=138, max=1708, avg=168.04, stdev=34.64 00:15:24.088 clat percentiles (usec): 00:15:24.088 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 133], 00:15:24.088 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 149], 00:15:24.088 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 172], 00:15:24.088 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 314], 99.95th=[ 383], 00:15:24.088 | 99.99th=[ 1680] 00:15:24.088 bw ( KiB/s): min=12288, max=12288, per=38.18%, avg=12288.00, stdev= 0.00, samples=1 00:15:24.088 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:15:24.088 lat (usec) : 250=99.75%, 500=0.23% 00:15:24.088 lat (msec) : 2=0.02% 00:15:24.088 cpu : usr=2.50%, sys=7.60%, ctx=5174, majf=0, minf=9 00:15:24.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:24.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.088 issued rwts: total=2560,2614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:24.088 job1: (groupid=0, jobs=1): err= 0: pid=72218: Mon Jul 22 18:22:35 2024 00:15:24.088 read: IOPS=1524, BW=6098KiB/s (6244kB/s)(6104KiB/1001msec) 00:15:24.088 slat (nsec): min=9729, max=51371, avg=16998.69, stdev=5320.84 00:15:24.088 clat (usec): min=270, max=1605, avg=336.02, stdev=38.80 00:15:24.088 lat (usec): min=287, max=1620, avg=353.02, stdev=38.55 00:15:24.088 clat percentiles (usec): 00:15:24.088 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:15:24.088 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:15:24.088 | 70.00th=[ 347], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 367], 00:15:24.088 | 99.00th=[ 392], 99.50th=[ 400], 99.90th=[ 578], 99.95th=[ 1598], 00:15:24.088 | 99.99th=[ 1598] 00:15:24.088 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:24.088 slat (usec): min=14, max=105, avg=28.11, stdev= 9.87 00:15:24.088 clat (usec): min=150, max=841, avg=267.74, stdev=24.02 00:15:24.088 lat (usec): min=219, max=870, avg=295.85, stdev=25.14 00:15:24.088 clat percentiles (usec): 00:15:24.088 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:15:24.088 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:15:24.088 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 302], 00:15:24.088 | 99.00th=[ 318], 99.50th=[ 343], 99.90th=[ 388], 99.95th=[ 840], 00:15:24.088 | 99.99th=[ 840] 00:15:24.088 bw ( KiB/s): min= 8192, max= 8192, per=25.45%, avg=8192.00, stdev= 0.00, samples=1 00:15:24.088 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:24.088 lat (usec) : 250=7.81%, 500=92.10%, 750=0.03%, 1000=0.03% 00:15:24.088 lat (msec) : 2=0.03% 00:15:24.088 cpu : usr=1.20%, sys=6.30%, ctx=3064, majf=0, minf=9 00:15:24.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:15:24.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.088 issued rwts: total=1526,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:24.088 job2: (groupid=0, jobs=1): err= 0: pid=72219: Mon Jul 22 18:22:35 2024 00:15:24.088 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:24.088 slat (nsec): min=12478, max=56170, avg=16598.96, stdev=4695.83 00:15:24.088 clat (usec): min=195, max=714, avg=245.42, stdev=30.87 00:15:24.088 lat (usec): min=210, max=738, avg=262.02, stdev=31.30 00:15:24.088 clat percentiles (usec): 00:15:24.088 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:15:24.088 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 251], 00:15:24.088 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:15:24.088 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 449], 99.95th=[ 523], 00:15:24.088 | 99.99th=[ 717] 00:15:24.088 write: IOPS=2365, BW=9463KiB/s (9690kB/s)(9472KiB/1001msec); 0 zone resets 00:15:24.088 slat (nsec): min=17571, max=93144, avg=20899.26, stdev=4420.63 00:15:24.088 clat (usec): min=130, max=666, avg=171.33, stdev=30.77 00:15:24.088 lat (usec): min=150, max=690, avg=192.23, stdev=32.24 00:15:24.088 clat percentiles (usec): 00:15:24.088 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:15:24.088 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 172], 60.00th=[ 176], 00:15:24.088 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 210], 00:15:24.088 | 99.00th=[ 273], 99.50th=[ 310], 99.90th=[ 498], 99.95th=[ 523], 00:15:24.088 | 99.99th=[ 668] 00:15:24.088 bw ( KiB/s): min= 8192, max= 8192, per=25.45%, avg=8192.00, stdev= 0.00, samples=1 00:15:24.088 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:24.088 lat (usec) : 250=79.82%, 500=20.09%, 750=0.09% 00:15:24.088 cpu : usr=2.10%, sys=6.20%, ctx=4419, majf=0, minf=15 00:15:24.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:24.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.088 issued rwts: total=2048,2368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:24.088 job3: (groupid=0, jobs=1): err= 0: pid=72220: Mon Jul 22 18:22:35 2024 00:15:24.088 read: IOPS=1525, BW=6102KiB/s (6248kB/s)(6108KiB/1001msec) 00:15:24.088 slat (nsec): min=9713, max=57494, avg=19266.53, stdev=6612.35 00:15:24.088 clat (usec): min=212, max=1573, avg=333.29, stdev=37.64 00:15:24.088 lat (usec): min=236, max=1601, avg=352.56, stdev=37.75 00:15:24.088 clat percentiles (usec): 00:15:24.088 | 1.00th=[ 293], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:15:24.088 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 338], 00:15:24.088 | 70.00th=[ 343], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 363], 00:15:24.088 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 570], 99.95th=[ 1582], 00:15:24.088 | 99.99th=[ 1582] 00:15:24.088 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:24.088 slat (nsec): min=11818, max=79561, avg=25605.01, stdev=8795.91 00:15:24.088 clat (usec): min=162, max=850, avg=270.37, stdev=24.71 00:15:24.088 lat (usec): min=231, max=873, avg=295.98, stdev=25.16 00:15:24.088 clat percentiles (usec): 
00:15:24.088 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:15:24.088 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:15:24.088 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:15:24.088 | 99.00th=[ 330], 99.50th=[ 351], 99.90th=[ 429], 99.95th=[ 848], 00:15:24.088 | 99.99th=[ 848] 00:15:24.088 bw ( KiB/s): min= 8208, max= 8208, per=25.50%, avg=8208.00, stdev= 0.00, samples=1 00:15:24.088 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:15:24.088 lat (usec) : 250=6.11%, 500=93.76%, 750=0.07%, 1000=0.03% 00:15:24.088 lat (msec) : 2=0.03% 00:15:24.088 cpu : usr=2.40%, sys=5.20%, ctx=3063, majf=0, minf=12 00:15:24.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:24.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.088 issued rwts: total=1527,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:24.088 00:15:24.088 Run status group 0 (all jobs): 00:15:24.088 READ: bw=29.9MiB/s (31.3MB/s), 6098KiB/s-9.99MiB/s (6244kB/s-10.5MB/s), io=29.9MiB (31.4MB), run=1001-1001msec 00:15:24.088 WRITE: bw=31.4MiB/s (33.0MB/s), 6138KiB/s-10.2MiB/s (6285kB/s-10.7MB/s), io=31.5MiB (33.0MB), run=1001-1001msec 00:15:24.088 00:15:24.088 Disk stats (read/write): 00:15:24.088 nvme0n1: ios=2098/2395, merge=0/0, ticks=471/378, in_queue=849, util=88.78% 00:15:24.088 nvme0n2: ios=1150/1536, merge=0/0, ticks=359/394, in_queue=753, util=87.93% 00:15:24.088 nvme0n3: ios=1728/2048, merge=0/0, ticks=433/372, in_queue=805, util=89.21% 00:15:24.088 nvme0n4: ios=1122/1536, merge=0/0, ticks=368/376, in_queue=744, util=89.67% 00:15:24.088 18:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:24.088 [global] 00:15:24.088 thread=1 00:15:24.088 invalidate=1 00:15:24.088 rw=write 00:15:24.088 time_based=1 00:15:24.088 runtime=1 00:15:24.088 ioengine=libaio 00:15:24.088 direct=1 00:15:24.088 bs=4096 00:15:24.088 iodepth=128 00:15:24.088 norandommap=0 00:15:24.088 numjobs=1 00:15:24.088 00:15:24.088 verify_dump=1 00:15:24.088 verify_backlog=512 00:15:24.088 verify_state_save=0 00:15:24.088 do_verify=1 00:15:24.088 verify=crc32c-intel 00:15:24.088 [job0] 00:15:24.088 filename=/dev/nvme0n1 00:15:24.088 [job1] 00:15:24.088 filename=/dev/nvme0n2 00:15:24.088 [job2] 00:15:24.088 filename=/dev/nvme0n3 00:15:24.088 [job3] 00:15:24.088 filename=/dev/nvme0n4 00:15:24.088 Could not set queue depth (nvme0n1) 00:15:24.088 Could not set queue depth (nvme0n2) 00:15:24.088 Could not set queue depth (nvme0n3) 00:15:24.088 Could not set queue depth (nvme0n4) 00:15:24.088 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.088 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.088 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.088 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.088 fio-3.35 00:15:24.088 Starting 4 threads 00:15:25.462 00:15:25.462 job0: (groupid=0, jobs=1): err= 0: pid=72274: Mon Jul 22 18:22:37 2024 00:15:25.462 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 
00:15:25.462 slat (usec): min=6, max=5192, avg=91.91, stdev=430.25 00:15:25.462 clat (usec): min=9227, max=15271, avg=12390.52, stdev=670.44 00:15:25.462 lat (usec): min=11174, max=15286, avg=12482.43, stdev=518.97 00:15:25.462 clat percentiles (usec): 00:15:25.462 | 1.00th=[ 9765], 5.00th=[11731], 10.00th=[11994], 20.00th=[12125], 00:15:25.462 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:15:25.462 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12780], 95.00th=[12911], 00:15:25.462 | 99.00th=[15139], 99.50th=[15139], 99.90th=[15270], 99.95th=[15270], 00:15:25.462 | 99.99th=[15270] 00:15:25.462 write: IOPS=5275, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1001msec); 0 zone resets 00:15:25.462 slat (usec): min=8, max=2735, avg=92.40, stdev=391.24 00:15:25.462 clat (usec): min=303, max=13115, avg=11939.03, stdev=1000.62 00:15:25.462 lat (usec): min=2878, max=13150, avg=12031.44, stdev=921.54 00:15:25.462 clat percentiles (usec): 00:15:25.462 | 1.00th=[ 6194], 5.00th=[11338], 10.00th=[11731], 20.00th=[11863], 00:15:25.462 | 30.00th=[11863], 40.00th=[11994], 50.00th=[11994], 60.00th=[12125], 00:15:25.462 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[12780], 00:15:25.462 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13042], 99.95th=[13042], 00:15:25.462 | 99.99th=[13173] 00:15:25.462 bw ( KiB/s): min=20480, max=20480, per=34.28%, avg=20480.00, stdev= 0.00, samples=1 00:15:25.462 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:15:25.462 lat (usec) : 500=0.01% 00:15:25.462 lat (msec) : 4=0.31%, 10=2.67%, 20=97.01% 00:15:25.462 cpu : usr=5.30%, sys=13.70%, ctx=328, majf=0, minf=9 00:15:25.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:25.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.462 issued rwts: total=5120,5281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.462 job1: (groupid=0, jobs=1): err= 0: pid=72275: Mon Jul 22 18:22:37 2024 00:15:25.462 read: IOPS=2434, BW=9739KiB/s (9973kB/s)(9788KiB/1005msec) 00:15:25.462 slat (usec): min=8, max=8560, avg=222.40, stdev=851.44 00:15:25.462 clat (usec): min=1772, max=39027, avg=28257.44, stdev=4291.67 00:15:25.462 lat (usec): min=9641, max=39055, avg=28479.84, stdev=4285.05 00:15:25.462 clat percentiles (usec): 00:15:25.462 | 1.00th=[10683], 5.00th=[21627], 10.00th=[22938], 20.00th=[25822], 00:15:25.462 | 30.00th=[26608], 40.00th=[27132], 50.00th=[28443], 60.00th=[29492], 00:15:25.462 | 70.00th=[30540], 80.00th=[31327], 90.00th=[33424], 95.00th=[34341], 00:15:25.462 | 99.00th=[36963], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:15:25.462 | 99.99th=[39060] 00:15:25.462 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:15:25.462 slat (usec): min=12, max=8472, avg=170.79, stdev=747.57 00:15:25.462 clat (usec): min=12427, max=36899, avg=22481.05, stdev=4296.37 00:15:25.462 lat (usec): min=12453, max=36923, avg=22651.83, stdev=4315.59 00:15:25.462 clat percentiles (usec): 00:15:25.462 | 1.00th=[15270], 5.00th=[16712], 10.00th=[17695], 20.00th=[19006], 00:15:25.462 | 30.00th=[19530], 40.00th=[20579], 50.00th=[21627], 60.00th=[22938], 00:15:25.462 | 70.00th=[24511], 80.00th=[26084], 90.00th=[28967], 95.00th=[30540], 00:15:25.462 | 99.00th=[34866], 99.50th=[34866], 99.90th=[36963], 99.95th=[36963], 00:15:25.462 | 99.99th=[36963] 00:15:25.462 bw ( 
KiB/s): min= 8968, max=11535, per=17.16%, avg=10251.50, stdev=1815.14, samples=2 00:15:25.462 iops : min= 2242, max= 2883, avg=2562.50, stdev=453.26, samples=2 00:15:25.462 lat (msec) : 2=0.02%, 10=0.22%, 20=18.15%, 50=81.61% 00:15:25.462 cpu : usr=3.19%, sys=7.27%, ctx=641, majf=0, minf=5 00:15:25.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:15:25.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.462 issued rwts: total=2447,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.462 job2: (groupid=0, jobs=1): err= 0: pid=72276: Mon Jul 22 18:22:37 2024 00:15:25.462 read: IOPS=2086, BW=8347KiB/s (8548kB/s)(8364KiB/1002msec) 00:15:25.462 slat (usec): min=5, max=8367, avg=223.55, stdev=834.84 00:15:25.462 clat (usec): min=738, max=41653, avg=28456.83, stdev=4490.20 00:15:25.462 lat (usec): min=1627, max=45374, avg=28680.38, stdev=4484.09 00:15:25.462 clat percentiles (usec): 00:15:25.462 | 1.00th=[ 9372], 5.00th=[22938], 10.00th=[24511], 20.00th=[26608], 00:15:25.462 | 30.00th=[26870], 40.00th=[27132], 50.00th=[27919], 60.00th=[28967], 00:15:25.462 | 70.00th=[30016], 80.00th=[31065], 90.00th=[33817], 95.00th=[35914], 00:15:25.462 | 99.00th=[39584], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:15:25.462 | 99.99th=[41681] 00:15:25.462 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:15:25.462 slat (usec): min=10, max=6957, avg=200.57, stdev=757.47 00:15:25.462 clat (usec): min=13585, max=39141, avg=25796.56, stdev=4419.65 00:15:25.462 lat (usec): min=13743, max=39166, avg=25997.13, stdev=4424.89 00:15:25.462 clat percentiles (usec): 00:15:25.462 | 1.00th=[17957], 5.00th=[19792], 10.00th=[20579], 20.00th=[22152], 00:15:25.462 | 30.00th=[23462], 40.00th=[24511], 50.00th=[25297], 60.00th=[26346], 00:15:25.462 | 70.00th=[27395], 80.00th=[28705], 90.00th=[31327], 95.00th=[32900], 00:15:25.462 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:15:25.462 | 99.99th=[39060] 00:15:25.462 bw ( KiB/s): min= 9269, max= 9269, per=15.52%, avg=9269.00, stdev= 0.00, samples=1 00:15:25.462 iops : min= 2317, max= 2317, avg=2317.00, stdev= 0.00, samples=1 00:15:25.462 lat (usec) : 750=0.02% 00:15:25.462 lat (msec) : 2=0.09%, 10=0.39%, 20=3.74%, 50=95.76% 00:15:25.463 cpu : usr=2.40%, sys=7.19%, ctx=714, majf=0, minf=15 00:15:25.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:15:25.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.463 issued rwts: total=2091,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.463 job3: (groupid=0, jobs=1): err= 0: pid=72277: Mon Jul 22 18:22:37 2024 00:15:25.463 read: IOPS=4128, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1002msec) 00:15:25.463 slat (usec): min=5, max=4368, avg=111.50, stdev=438.24 00:15:25.463 clat (usec): min=615, max=18510, avg=14099.03, stdev=1596.74 00:15:25.463 lat (usec): min=4750, max=18545, avg=14210.53, stdev=1630.61 00:15:25.463 clat percentiles (usec): 00:15:25.463 | 1.00th=[10290], 5.00th=[11600], 10.00th=[12256], 20.00th=[13566], 00:15:25.463 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 00:15:25.463 | 70.00th=[14484], 80.00th=[15008], 
90.00th=[16057], 95.00th=[16712], 00:15:25.463 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:15:25.463 | 99.99th=[18482] 00:15:25.463 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:15:25.463 slat (usec): min=11, max=6261, avg=108.93, stdev=419.03 00:15:25.463 clat (usec): min=5339, max=22603, avg=14796.05, stdev=1532.88 00:15:25.463 lat (usec): min=5377, max=22682, avg=14904.98, stdev=1567.19 00:15:25.463 clat percentiles (usec): 00:15:25.463 | 1.00th=[10290], 5.00th=[12780], 10.00th=[13435], 20.00th=[13829], 00:15:25.463 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:15:25.463 | 70.00th=[15008], 80.00th=[15795], 90.00th=[17171], 95.00th=[17695], 00:15:25.463 | 99.00th=[18220], 99.50th=[18482], 99.90th=[21103], 99.95th=[21627], 00:15:25.463 | 99.99th=[22676] 00:15:25.463 bw ( KiB/s): min=17523, max=18680, per=30.30%, avg=18101.50, stdev=818.12, samples=2 00:15:25.463 iops : min= 4380, max= 4672, avg=4526.00, stdev=206.48, samples=2 00:15:25.463 lat (usec) : 750=0.01% 00:15:25.463 lat (msec) : 10=0.96%, 20=98.90%, 50=0.13% 00:15:25.463 cpu : usr=3.50%, sys=14.19%, ctx=563, majf=0, minf=8 00:15:25.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:25.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.463 issued rwts: total=4137,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.463 00:15:25.463 Run status group 0 (all jobs): 00:15:25.463 READ: bw=53.6MiB/s (56.2MB/s), 8347KiB/s-20.0MiB/s (8548kB/s-20.9MB/s), io=53.9MiB (56.5MB), run=1001-1005msec 00:15:25.463 WRITE: bw=58.3MiB/s (61.2MB/s), 9.95MiB/s-20.6MiB/s (10.4MB/s-21.6MB/s), io=58.6MiB (61.5MB), run=1001-1005msec 00:15:25.463 00:15:25.463 Disk stats (read/write): 00:15:25.463 nvme0n1: ios=4434/4608, merge=0/0, ticks=12224/11924, in_queue=24148, util=90.37% 00:15:25.463 nvme0n2: ios=2097/2290, merge=0/0, ticks=19136/15151, in_queue=34287, util=88.77% 00:15:25.463 nvme0n3: ios=1911/2048, merge=0/0, ticks=17378/16401, in_queue=33779, util=88.36% 00:15:25.463 nvme0n4: ios=3601/3975, merge=0/0, ticks=16448/17334, in_queue=33782, util=90.05% 00:15:25.463 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:25.463 [global] 00:15:25.463 thread=1 00:15:25.463 invalidate=1 00:15:25.463 rw=randwrite 00:15:25.463 time_based=1 00:15:25.463 runtime=1 00:15:25.463 ioengine=libaio 00:15:25.463 direct=1 00:15:25.463 bs=4096 00:15:25.463 iodepth=128 00:15:25.463 norandommap=0 00:15:25.463 numjobs=1 00:15:25.463 00:15:25.463 verify_dump=1 00:15:25.463 verify_backlog=512 00:15:25.463 verify_state_save=0 00:15:25.463 do_verify=1 00:15:25.463 verify=crc32c-intel 00:15:25.463 [job0] 00:15:25.463 filename=/dev/nvme0n1 00:15:25.463 [job1] 00:15:25.463 filename=/dev/nvme0n2 00:15:25.463 [job2] 00:15:25.463 filename=/dev/nvme0n3 00:15:25.463 [job3] 00:15:25.463 filename=/dev/nvme0n4 00:15:25.463 Could not set queue depth (nvme0n1) 00:15:25.463 Could not set queue depth (nvme0n2) 00:15:25.463 Could not set queue depth (nvme0n3) 00:15:25.463 Could not set queue depth (nvme0n4) 00:15:25.463 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:25.463 job1: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:25.463 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:25.463 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:25.463 fio-3.35 00:15:25.463 Starting 4 threads 00:15:26.397 00:15:26.397 job0: (groupid=0, jobs=1): err= 0: pid=72337: Mon Jul 22 18:22:38 2024 00:15:26.397 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:15:26.397 slat (usec): min=4, max=12460, avg=146.51, stdev=843.25 00:15:26.397 clat (usec): min=8499, max=41832, avg=19114.83, stdev=7923.95 00:15:26.397 lat (usec): min=8518, max=42295, avg=19261.34, stdev=7976.94 00:15:26.397 clat percentiles (usec): 00:15:26.397 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[12911], 20.00th=[13304], 00:15:26.397 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14222], 60.00th=[15401], 00:15:26.397 | 70.00th=[28181], 80.00th=[29754], 90.00th=[30802], 95.00th=[31327], 00:15:26.397 | 99.00th=[32900], 99.50th=[38536], 99.90th=[38536], 99.95th=[41157], 00:15:26.397 | 99.99th=[41681] 00:15:26.397 write: IOPS=3177, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1006msec); 0 zone resets 00:15:26.397 slat (usec): min=9, max=21046, avg=164.61, stdev=1166.44 00:15:26.397 clat (usec): min=5698, max=50719, avg=21143.76, stdev=7830.28 00:15:26.397 lat (usec): min=5719, max=50753, avg=21308.37, stdev=7953.63 00:15:26.397 clat percentiles (usec): 00:15:26.397 | 1.00th=[ 8225], 5.00th=[11600], 10.00th=[13829], 20.00th=[14615], 00:15:26.397 | 30.00th=[14746], 40.00th=[15270], 50.00th=[16581], 60.00th=[26346], 00:15:26.397 | 70.00th=[28705], 80.00th=[29230], 90.00th=[29754], 95.00th=[33817], 00:15:26.397 | 99.00th=[36439], 99.50th=[39060], 99.90th=[42730], 99.95th=[49546], 00:15:26.397 | 99.99th=[50594] 00:15:26.397 bw ( KiB/s): min= 8525, max=16040, per=22.85%, avg=12282.50, stdev=5313.91, samples=2 00:15:26.397 iops : min= 2131, max= 4010, avg=3070.50, stdev=1328.65, samples=2 00:15:26.397 lat (msec) : 10=2.87%, 20=57.46%, 50=39.66%, 100=0.02% 00:15:26.397 cpu : usr=3.08%, sys=7.46%, ctx=359, majf=0, minf=17 00:15:26.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:15:26.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:26.397 issued rwts: total=3072,3197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:26.397 job1: (groupid=0, jobs=1): err= 0: pid=72338: Mon Jul 22 18:22:38 2024 00:15:26.397 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:15:26.397 slat (usec): min=4, max=12179, avg=105.03, stdev=704.41 00:15:26.397 clat (usec): min=4194, max=25487, avg=14431.64, stdev=2395.21 00:15:26.397 lat (usec): min=4226, max=27106, avg=14536.68, stdev=2420.57 00:15:26.397 clat percentiles (usec): 00:15:26.397 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[12518], 20.00th=[13698], 00:15:26.397 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14222], 60.00th=[14615], 00:15:26.397 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15795], 95.00th=[19268], 00:15:26.397 | 99.00th=[23200], 99.50th=[23987], 99.90th=[25560], 99.95th=[25560], 00:15:26.397 | 99.99th=[25560] 00:15:26.397 write: IOPS=4678, BW=18.3MiB/s (19.2MB/s)(18.3MiB/1002msec); 0 zone resets 00:15:26.397 slat (usec): min=4, max=9654, avg=103.12, stdev=613.86 00:15:26.397 clat (usec): min=1044, 
max=25476, avg=12905.00, stdev=1924.48 00:15:26.397 lat (usec): min=2956, max=25483, avg=13008.12, stdev=1848.28 00:15:26.397 clat percentiles (usec): 00:15:26.397 | 1.00th=[ 6063], 5.00th=[ 8717], 10.00th=[11731], 20.00th=[12256], 00:15:26.397 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:15:26.397 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14615], 95.00th=[15401], 00:15:26.397 | 99.00th=[17695], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:15:26.397 | 99.99th=[25560] 00:15:26.397 bw ( KiB/s): min=20398, max=20398, per=37.95%, avg=20398.00, stdev= 0.00, samples=1 00:15:26.397 iops : min= 5099, max= 5099, avg=5099.00, stdev= 0.00, samples=1 00:15:26.397 lat (msec) : 2=0.01%, 4=0.24%, 10=5.80%, 20=92.19%, 50=1.76% 00:15:26.397 cpu : usr=3.90%, sys=10.99%, ctx=256, majf=0, minf=11 00:15:26.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:26.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:26.397 issued rwts: total=4608,4688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:26.398 job2: (groupid=0, jobs=1): err= 0: pid=72339: Mon Jul 22 18:22:38 2024 00:15:26.398 read: IOPS=2853, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1003msec) 00:15:26.398 slat (usec): min=6, max=18663, avg=189.03, stdev=1218.86 00:15:26.398 clat (usec): min=2480, max=73988, avg=24438.04, stdev=15933.92 00:15:26.398 lat (usec): min=2492, max=74011, avg=24627.07, stdev=16000.30 00:15:26.398 clat percentiles (usec): 00:15:26.398 | 1.00th=[12256], 5.00th=[15139], 10.00th=[15401], 20.00th=[15664], 00:15:26.398 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16188], 60.00th=[16319], 00:15:26.398 | 70.00th=[16581], 80.00th=[36963], 90.00th=[52691], 95.00th=[63177], 00:15:26.398 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:15:26.398 | 99.99th=[73925] 00:15:26.398 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:15:26.398 slat (usec): min=9, max=18230, avg=142.05, stdev=754.85 00:15:26.398 clat (usec): min=11552, max=33803, avg=18390.28, stdev=4952.37 00:15:26.398 lat (usec): min=13703, max=41843, avg=18532.33, stdev=4944.14 00:15:26.398 clat percentiles (usec): 00:15:26.398 | 1.00th=[12518], 5.00th=[14746], 10.00th=[14877], 20.00th=[15270], 00:15:26.398 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15795], 60.00th=[16057], 00:15:26.398 | 70.00th=[18744], 80.00th=[21103], 90.00th=[26608], 95.00th=[30540], 00:15:26.398 | 99.00th=[33424], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:15:26.398 | 99.99th=[33817] 00:15:26.398 bw ( KiB/s): min=16286, max=16286, per=30.30%, avg=16286.00, stdev= 0.00, samples=1 00:15:26.398 iops : min= 4071, max= 4071, avg=4071.00, stdev= 0.00, samples=1 00:15:26.398 lat (msec) : 4=0.24%, 20=74.54%, 50=19.90%, 100=5.33% 00:15:26.398 cpu : usr=2.00%, sys=8.38%, ctx=187, majf=0, minf=6 00:15:26.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:15:26.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:26.398 issued rwts: total=2862,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:26.398 job3: (groupid=0, jobs=1): err= 0: pid=72340: Mon Jul 22 18:22:38 2024 00:15:26.398 read: IOPS=2103, 
BW=8414KiB/s (8616kB/s)(8448KiB/1004msec) 00:15:26.398 slat (usec): min=4, max=22976, avg=231.54, stdev=1412.25 00:15:26.398 clat (usec): min=2711, max=48885, avg=30505.07, stdev=5817.67 00:15:26.398 lat (usec): min=5412, max=53995, avg=30736.61, stdev=5866.59 00:15:26.398 clat percentiles (usec): 00:15:26.398 | 1.00th=[ 5735], 5.00th=[21890], 10.00th=[25035], 20.00th=[27132], 00:15:26.398 | 30.00th=[28443], 40.00th=[29754], 50.00th=[30016], 60.00th=[31065], 00:15:26.398 | 70.00th=[31851], 80.00th=[34866], 90.00th=[39060], 95.00th=[39584], 00:15:26.398 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46400], 99.95th=[47973], 00:15:26.398 | 99.99th=[49021] 00:15:26.398 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:15:26.398 slat (usec): min=6, max=28177, avg=193.11, stdev=1434.12 00:15:26.398 clat (usec): min=11803, max=50978, avg=24518.01, stdev=5801.95 00:15:26.398 lat (usec): min=11820, max=51025, avg=24711.11, stdev=5803.22 00:15:26.398 clat percentiles (usec): 00:15:26.398 | 1.00th=[15533], 5.00th=[16581], 10.00th=[17957], 20.00th=[18744], 00:15:26.398 | 30.00th=[19268], 40.00th=[20055], 50.00th=[23725], 60.00th=[28443], 00:15:26.398 | 70.00th=[28967], 80.00th=[29230], 90.00th=[31065], 95.00th=[34866], 00:15:26.398 | 99.00th=[35914], 99.50th=[35914], 99.90th=[38011], 99.95th=[44303], 00:15:26.398 | 99.99th=[51119] 00:15:26.398 bw ( KiB/s): min= 9137, max= 9137, per=17.00%, avg=9137.00, stdev= 0.00, samples=1 00:15:26.398 iops : min= 2284, max= 2284, avg=2284.00, stdev= 0.00, samples=1 00:15:26.398 lat (msec) : 4=0.02%, 10=0.47%, 20=22.95%, 50=76.54%, 100=0.02% 00:15:26.398 cpu : usr=2.09%, sys=5.58%, ctx=132, majf=0, minf=9 00:15:26.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:26.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:26.398 issued rwts: total=2112,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:26.398 00:15:26.398 Run status group 0 (all jobs): 00:15:26.398 READ: bw=49.1MiB/s (51.5MB/s), 8414KiB/s-18.0MiB/s (8616kB/s-18.8MB/s), io=49.4MiB (51.8MB), run=1002-1006msec 00:15:26.398 WRITE: bw=52.5MiB/s (55.0MB/s), 9.96MiB/s-18.3MiB/s (10.4MB/s-19.2MB/s), io=52.8MiB (55.4MB), run=1002-1006msec 00:15:26.398 00:15:26.398 Disk stats (read/write): 00:15:26.398 nvme0n1: ios=2412/2560, merge=0/0, ticks=23915/27559, in_queue=51474, util=89.78% 00:15:26.398 nvme0n2: ios=3834/4096, merge=0/0, ticks=51746/50099, in_queue=101845, util=88.15% 00:15:26.398 nvme0n3: ios=2560/2944, merge=0/0, ticks=13206/11799, in_queue=25005, util=89.00% 00:15:26.398 nvme0n4: ios=1780/2048, merge=0/0, ticks=42861/44406, in_queue=87267, util=89.56% 00:15:26.655 18:22:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:26.655 18:22:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=72353 00:15:26.655 18:22:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:26.655 18:22:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:26.655 [global] 00:15:26.655 thread=1 00:15:26.655 invalidate=1 00:15:26.655 rw=read 00:15:26.655 time_based=1 00:15:26.655 runtime=10 00:15:26.655 ioengine=libaio 00:15:26.655 direct=1 00:15:26.655 bs=4096 00:15:26.655 iodepth=1 00:15:26.655 
norandommap=1 00:15:26.655 numjobs=1 00:15:26.655 00:15:26.655 [job0] 00:15:26.655 filename=/dev/nvme0n1 00:15:26.655 [job1] 00:15:26.655 filename=/dev/nvme0n2 00:15:26.655 [job2] 00:15:26.655 filename=/dev/nvme0n3 00:15:26.655 [job3] 00:15:26.655 filename=/dev/nvme0n4 00:15:26.655 Could not set queue depth (nvme0n1) 00:15:26.655 Could not set queue depth (nvme0n2) 00:15:26.655 Could not set queue depth (nvme0n3) 00:15:26.655 Could not set queue depth (nvme0n4) 00:15:26.655 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.655 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.656 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.656 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:26.656 fio-3.35 00:15:26.656 Starting 4 threads 00:15:29.934 18:22:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:29.934 fio: pid=72396, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:29.934 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=49025024, buflen=4096 00:15:29.934 18:22:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:30.191 fio: pid=72395, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:30.191 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=53899264, buflen=4096 00:15:30.191 18:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:30.191 18:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:30.449 fio: pid=72393, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:30.449 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=40681472, buflen=4096 00:15:30.449 18:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:30.449 18:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:30.707 fio: pid=72394, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:30.707 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=47775744, buflen=4096 00:15:30.707 00:15:30.707 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72393: Mon Jul 22 18:22:42 2024 00:15:30.707 read: IOPS=2843, BW=11.1MiB/s (11.6MB/s)(38.8MiB/3493msec) 00:15:30.707 slat (usec): min=8, max=10722, avg=17.58, stdev=165.66 00:15:30.707 clat (usec): min=179, max=2705, avg=332.38, stdev=72.27 00:15:30.707 lat (usec): min=192, max=11006, avg=349.96, stdev=181.51 00:15:30.707 clat percentiles (usec): 00:15:30.707 | 1.00th=[ 196], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:15:30.707 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 318], 00:15:30.707 | 70.00th=[ 334], 80.00th=[ 383], 90.00th=[ 445], 95.00th=[ 474], 00:15:30.707 | 99.00th=[ 523], 99.50th=[ 545], 99.90th=[ 799], 99.95th=[ 955], 00:15:30.707 | 99.99th=[ 2704] 
00:15:30.707 bw ( KiB/s): min= 8816, max=12936, per=23.24%, avg=11148.67, stdev=1815.09, samples=6 00:15:30.707 iops : min= 2204, max= 3234, avg=2787.17, stdev=453.77, samples=6 00:15:30.707 lat (usec) : 250=2.22%, 500=95.75%, 750=1.87%, 1000=0.12% 00:15:30.707 lat (msec) : 2=0.01%, 4=0.01% 00:15:30.707 cpu : usr=1.00%, sys=3.75%, ctx=9950, majf=0, minf=1 00:15:30.707 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.707 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.707 issued rwts: total=9933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.707 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.708 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72394: Mon Jul 22 18:22:42 2024 00:15:30.708 read: IOPS=2994, BW=11.7MiB/s (12.3MB/s)(45.6MiB/3896msec) 00:15:30.708 slat (usec): min=8, max=13538, avg=22.68, stdev=241.19 00:15:30.708 clat (usec): min=3, max=3970, avg=309.19, stdev=97.10 00:15:30.708 lat (usec): min=183, max=13846, avg=331.87, stdev=261.03 00:15:30.708 clat percentiles (usec): 00:15:30.708 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 273], 00:15:30.708 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:15:30.708 | 70.00th=[ 318], 80.00th=[ 359], 90.00th=[ 429], 95.00th=[ 465], 00:15:30.708 | 99.00th=[ 510], 99.50th=[ 553], 99.90th=[ 824], 99.95th=[ 1156], 00:15:30.708 | 99.99th=[ 3261] 00:15:30.708 bw ( KiB/s): min= 8512, max=13441, per=23.73%, avg=11382.71, stdev=1910.64, samples=7 00:15:30.708 iops : min= 2128, max= 3360, avg=2845.57, stdev=477.58, samples=7 00:15:30.708 lat (usec) : 4=0.01%, 250=16.85%, 500=81.73%, 750=1.26%, 1000=0.08% 00:15:30.708 lat (msec) : 2=0.03%, 4=0.03% 00:15:30.708 cpu : usr=1.36%, sys=4.96%, ctx=11684, majf=0, minf=1 00:15:30.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.708 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.708 issued rwts: total=11665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.708 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72395: Mon Jul 22 18:22:42 2024 00:15:30.708 read: IOPS=4022, BW=15.7MiB/s (16.5MB/s)(51.4MiB/3272msec) 00:15:30.708 slat (usec): min=11, max=10988, avg=19.39, stdev=122.40 00:15:30.708 clat (usec): min=182, max=2651, avg=227.33, stdev=56.24 00:15:30.708 lat (usec): min=196, max=11247, avg=246.72, stdev=135.91 00:15:30.708 clat percentiles (usec): 00:15:30.708 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:15:30.708 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:15:30.708 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 285], 95.00th=[ 330], 00:15:30.708 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 437], 99.95th=[ 725], 00:15:30.708 | 99.99th=[ 2606] 00:15:30.708 bw ( KiB/s): min=13712, max=18200, per=33.80%, avg=16216.67, stdev=2059.59, samples=6 00:15:30.708 iops : min= 3428, max= 4550, avg=4054.17, stdev=514.90, samples=6 00:15:30.708 lat (usec) : 250=84.40%, 500=15.51%, 750=0.04% 00:15:30.708 lat (msec) : 2=0.02%, 4=0.02% 00:15:30.708 cpu : usr=1.62%, sys=6.39%, ctx=13162, majf=0, minf=1 00:15:30.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.708 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.708 issued rwts: total=13160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.708 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=72396: Mon Jul 22 18:22:42 2024 00:15:30.708 read: IOPS=4048, BW=15.8MiB/s (16.6MB/s)(46.8MiB/2957msec) 00:15:30.708 slat (nsec): min=12136, max=89152, avg=18397.19, stdev=6712.92 00:15:30.708 clat (usec): min=181, max=2535, avg=226.48, stdev=47.19 00:15:30.708 lat (usec): min=195, max=2565, avg=244.88, stdev=50.03 00:15:30.708 clat percentiles (usec): 00:15:30.708 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 200], 00:15:30.708 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:15:30.708 | 70.00th=[ 231], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 297], 00:15:30.708 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 570], 99.95th=[ 758], 00:15:30.708 | 99.99th=[ 2311] 00:15:30.708 bw ( KiB/s): min=13728, max=17536, per=33.17%, avg=15910.40, stdev=1678.81, samples=5 00:15:30.708 iops : min= 3432, max= 4384, avg=3977.60, stdev=419.70, samples=5 00:15:30.708 lat (usec) : 250=84.06%, 500=15.81%, 750=0.07%, 1000=0.04% 00:15:30.708 lat (msec) : 4=0.02% 00:15:30.708 cpu : usr=1.73%, sys=6.60%, ctx=11971, majf=0, minf=1 00:15:30.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.708 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.708 issued rwts: total=11970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.708 00:15:30.708 Run status group 0 (all jobs): 00:15:30.708 READ: bw=46.8MiB/s (49.1MB/s), 11.1MiB/s-15.8MiB/s (11.6MB/s-16.6MB/s), io=183MiB (191MB), run=2957-3896msec 00:15:30.708 00:15:30.708 Disk stats (read/write): 00:15:30.708 nvme0n1: ios=9490/0, merge=0/0, ticks=3016/0, in_queue=3016, util=95.36% 00:15:30.708 nvme0n2: ios=11521/0, merge=0/0, ticks=3559/0, in_queue=3559, util=95.36% 00:15:30.708 nvme0n3: ios=12533/0, merge=0/0, ticks=2875/0, in_queue=2875, util=96.27% 00:15:30.708 nvme0n4: ios=11564/0, merge=0/0, ticks=2662/0, in_queue=2662, util=96.76% 00:15:30.965 18:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:30.965 18:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:31.222 18:22:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:31.222 18:22:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:31.799 18:22:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:31.799 18:22:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:32.365 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:32.365 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:32.622 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:32.622 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 72353 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:33.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:33.188 nvmf hotplug test: fio failed as expected 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:33.188 18:22:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.446 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:33.446 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:33.446 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.447 18:22:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.447 rmmod nvme_tcp 00:15:33.447 rmmod nvme_fabrics 00:15:33.447 rmmod nvme_keyring 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 71970 ']' 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 71970 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 71970 ']' 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 71970 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71970 00:15:33.447 killing process with pid 71970 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71970' 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 71970 00:15:33.447 18:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 71970 00:15:34.821 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:34.822 ************************************ 00:15:34.822 END TEST nvmf_fio_target 00:15:34.822 ************************************ 00:15:34.822 00:15:34.822 real 0m21.940s 00:15:34.822 user 1m20.689s 00:15:34.822 sys 0m10.658s 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.822 18:22:46 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:34.822 ************************************ 00:15:34.822 START TEST nvmf_bdevio 00:15:34.822 ************************************ 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:34.822 * Looking for test storage... 00:15:34.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.822 18:22:46 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:34.822 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:34.823 Cannot find device "nvmf_tgt_br" 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:15:34.823 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.081 Cannot find device "nvmf_tgt_br2" 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:35.081 Cannot find device "nvmf_tgt_br" 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:35.081 Cannot find device "nvmf_tgt_br2" 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:35.081 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:35.081 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:35.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:15:35.340 00:15:35.340 --- 10.0.0.2 ping statistics --- 00:15:35.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.340 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:35.340 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:35.340 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:15:35.340 00:15:35.340 --- 10.0.0.3 ping statistics --- 00:15:35.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.340 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:35.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:35.340 00:15:35.340 --- 10.0.0.1 ping statistics --- 00:15:35.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.340 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:35.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=72676 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 72676 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 72676 ']' 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.340 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:35.340 [2024-07-22 18:22:47.249200] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
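The block above is nvmf_veth_init from nvmf/common.sh building the virtual test network: the target interfaces live in the nvmf_tgt_ns_spdk namespace as 10.0.0.2 and 10.0.0.3, the initiator side stays on the host as 10.0.0.1, the veth peers are bridged through nvmf_br, TCP port 4420 is opened in iptables, and the three pings confirm connectivity before nvmf_tgt is started. A condensed sketch of the same wiring, limited to a single target interface and reusing the interface names from the trace:

# Condensed single-interface version of the veth/netns setup traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator (host) side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers and allow NVMe/TCP traffic on port 4420.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Same connectivity checks as in the trace.
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1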
00:15:35.340 [2024-07-22 18:22:47.249601] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.599 [2024-07-22 18:22:47.419737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.857 [2024-07-22 18:22:47.694221] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.857 [2024-07-22 18:22:47.694719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.857 [2024-07-22 18:22:47.695153] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.857 [2024-07-22 18:22:47.695558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.857 [2024-07-22 18:22:47.695781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.857 [2024-07-22 18:22:47.695996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:35.857 [2024-07-22 18:22:47.696241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:35.857 [2024-07-22 18:22:47.696446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.857 [2024-07-22 18:22:47.696454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:36.115 [2024-07-22 18:22:47.902662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:36.373 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.374 [2024-07-22 18:22:48.195242] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.374 Malloc0 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.374 [2024-07-22 18:22:48.304699] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:36.374 { 00:15:36.374 "params": { 00:15:36.374 "name": "Nvme$subsystem", 00:15:36.374 "trtype": "$TEST_TRANSPORT", 00:15:36.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:36.374 "adrfam": "ipv4", 00:15:36.374 "trsvcid": "$NVMF_PORT", 00:15:36.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:36.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:36.374 "hdgst": ${hdgst:-false}, 00:15:36.374 "ddgst": ${ddgst:-false} 00:15:36.374 }, 00:15:36.374 "method": "bdev_nvme_attach_controller" 00:15:36.374 } 00:15:36.374 EOF 00:15:36.374 )") 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
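Here target/bdevio.sh@24 generates a one-controller JSON configuration with gen_nvmf_target_json and hands it to the bdevio application through /dev/fd/62; the assembled bdev_nvme_attach_controller parameters are printed by printf just below. A hedged way to reproduce that invocation by hand is sketched next: the outer "subsystems"/"config" wrapper is an assumption about the shape gen_nvmf_target_json emits, while the inner parameters are copied from the trace.

bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio

config='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}'

# bdevio reads its bdev configuration from a file descriptor, just like the
# "--json /dev/fd/62" call in the trace; process substitution provides one.
"$bdevio" --json <(printf '%s\n' "$config")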
00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:36.374 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:36.374 "params": { 00:15:36.374 "name": "Nvme1", 00:15:36.374 "trtype": "tcp", 00:15:36.374 "traddr": "10.0.0.2", 00:15:36.374 "adrfam": "ipv4", 00:15:36.374 "trsvcid": "4420", 00:15:36.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.374 "hdgst": false, 00:15:36.374 "ddgst": false 00:15:36.374 }, 00:15:36.374 "method": "bdev_nvme_attach_controller" 00:15:36.374 }' 00:15:36.633 [2024-07-22 18:22:48.453379] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:15:36.633 [2024-07-22 18:22:48.453598] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72718 ] 00:15:36.633 [2024-07-22 18:22:48.633830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:37.200 [2024-07-22 18:22:48.937021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.200 [2024-07-22 18:22:48.937162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.200 [2024-07-22 18:22:48.937174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.200 [2024-07-22 18:22:49.154264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:37.458 I/O targets: 00:15:37.458 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:37.458 00:15:37.459 00:15:37.459 CUnit - A unit testing framework for C - Version 2.1-3 00:15:37.459 http://cunit.sourceforge.net/ 00:15:37.459 00:15:37.459 00:15:37.459 Suite: bdevio tests on: Nvme1n1 00:15:37.459 Test: blockdev write read block ...passed 00:15:37.459 Test: blockdev write zeroes read block ...passed 00:15:37.459 Test: blockdev write zeroes read no split ...passed 00:15:37.459 Test: blockdev write zeroes read split ...passed 00:15:37.459 Test: blockdev write zeroes read split partial ...passed 00:15:37.459 Test: blockdev reset ...[2024-07-22 18:22:49.436475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:37.459 [2024-07-22 18:22:49.436989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:15:37.459 [2024-07-22 18:22:49.452116] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:37.459 passed 00:15:37.459 Test: blockdev write read 8 blocks ...passed 00:15:37.459 Test: blockdev write read size > 128k ...passed 00:15:37.459 Test: blockdev write read invalid size ...passed 00:15:37.459 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:37.459 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:37.459 Test: blockdev write read max offset ...passed 00:15:37.459 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:37.459 Test: blockdev writev readv 8 blocks ...passed 00:15:37.459 Test: blockdev writev readv 30 x 1block ...passed 00:15:37.459 Test: blockdev writev readv block ...passed 00:15:37.459 Test: blockdev writev readv size > 128k ...passed 00:15:37.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:37.459 Test: blockdev comparev and writev ...[2024-07-22 18:22:49.464916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.459 [2024-07-22 18:22:49.464992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:37.459 [2024-07-22 18:22:49.465029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.459 [2024-07-22 18:22:49.465054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:37.459 [2024-07-22 18:22:49.465464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.459 [2024-07-22 18:22:49.465516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:37.459 [2024-07-22 18:22:49.465548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.459 [2024-07-22 18:22:49.465569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:37.459 [2024-07-22 18:22:49.465977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.459 [2024-07-22 18:22:49.466017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:37.459 [2024-07-22 18:22:49.466046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.459 [2024-07-22 18:22:49.466070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:37.459 [2024-07-22 18:22:49.466459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.459 [2024-07-22 18:22:49.466497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:37.459 [2024-07-22 18:22:49.466525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.459 [2024-07-22 18:22:49.466544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:37.459 passed 00:15:37.459 Test: blockdev nvme passthru rw ...passed 00:15:37.459 Test: blockdev nvme passthru vendor specific ...passed 00:15:37.459 Test: blockdev nvme admin passthru ...[2024-07-22 18:22:49.467601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.459 [2024-07-22 18:22:49.467651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:37.459 [2024-07-22 18:22:49.467813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.459 [2024-07-22 18:22:49.467850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:37.459 [2024-07-22 18:22:49.468008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.459 [2024-07-22 18:22:49.468044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:37.459 [2024-07-22 18:22:49.468187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.459 [2024-07-22 18:22:49.468236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:37.717 passed 00:15:37.717 Test: blockdev copy ...passed 00:15:37.717 00:15:37.717 Run Summary: Type Total Ran Passed Failed Inactive 00:15:37.717 suites 1 1 n/a 0 0 00:15:37.717 tests 23 23 23 0 0 00:15:37.717 asserts 152 152 152 0 n/a 00:15:37.717 00:15:37.717 Elapsed time = 0.309 seconds 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.651 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.651 rmmod nvme_tcp 00:15:38.651 rmmod nvme_fabrics 00:15:38.651 rmmod nvme_keyring 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
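What follows in the log is the tail of nvmftestfini: the kernel NVMe/TCP initiator modules are unloaded, the nvmf_tgt process (pid 72676 in this run) is killed, the namespace is removed and the initiator-side address is flushed. A flattened sketch of that teardown is below; the ip netns delete step is an assumption about what the _remove_spdk_ns helper does, the rest mirrors the traced commands.

# Hedged teardown sketch mirroring nvmfcleanup/nvmftestfini around this point.
nvmfpid="$1"                               # pid of nvmf_tgt (72676 in this run)

modprobe -v -r nvme-tcp                    # unloads nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics

kill "$nvmfpid"                            # killprocess $nvmfpid, as traced below
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done

ip netns delete nvmf_tgt_ns_spdk           # assumed body of _remove_spdk_ns
ip -4 addr flush nvmf_init_if              # final step of nvmf_tcp_fini in the trace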
00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 72676 ']' 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 72676 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 72676 ']' 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 72676 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72676 00:15:38.910 killing process with pid 72676 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72676' 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 72676 00:15:38.910 18:22:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 72676 00:15:40.286 18:22:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:40.287 00:15:40.287 real 0m5.441s 00:15:40.287 user 0m20.496s 00:15:40.287 sys 0m1.115s 00:15:40.287 ************************************ 00:15:40.287 END TEST nvmf_bdevio 00:15:40.287 ************************************ 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:40.287 ************************************ 00:15:40.287 END TEST nvmf_target_core 00:15:40.287 ************************************ 00:15:40.287 00:15:40.287 real 2m57.363s 00:15:40.287 user 7m58.298s 00:15:40.287 sys 0m52.504s 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 
00:15:40.287 18:22:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:40.287 18:22:52 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:40.287 18:22:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:40.287 18:22:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.287 18:22:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:40.287 ************************************ 00:15:40.287 START TEST nvmf_target_extra 00:15:40.287 ************************************ 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:40.287 * Looking for test storage... 00:15:40.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.287 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.546 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:15:40.546 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:15:40.546 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:40.547 18:22:52 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.547 ************************************ 00:15:40.547 START TEST nvmf_auth_target 00:15:40.547 ************************************ 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:40.547 * Looking for test storage... 00:15:40.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.547 18:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.547 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.548 18:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:40.548 Cannot find device "nvmf_tgt_br" 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:40.548 Cannot find device "nvmf_tgt_br2" 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:40.548 Cannot find device "nvmf_tgt_br" 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:40.548 Cannot find device "nvmf_tgt_br2" 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:40.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:40.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:15:40.548 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:40.806 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:40.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:15:40.807 00:15:40.807 --- 10.0.0.2 ping statistics --- 00:15:40.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.807 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:40.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:40.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:15:40.807 00:15:40.807 --- 10.0.0.3 ping statistics --- 00:15:40.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.807 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:40.807 00:15:40.807 --- 10.0.0.1 ping statistics --- 00:15:40.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.807 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72990 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72990 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72990 ']' 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
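The nvmf_veth_init sequence traced above (the ip netns / ip link / iptables calls ending with the three pings) builds the virtual test network that the rest of the auth test runs on: the initiator side stays in the default namespace, the target side lives in nvmf_tgt_ns_spdk, and a bridge joins the host-side veth ends. Condensed into one standalone script with the same interface, namespace, and address names as in the trace (run as root; a simplified sketch without the cleanup and error handling the harness does first), it amounts to:

  #!/usr/bin/env bash
  set -e

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: <endpoint> <-> <bridge-side end>
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # move the target-side ends into the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # address plan used by the tests: .1 = initiator, .2/.3 = target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the three host-side ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP (port 4420) in, and hairpin traffic across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # same reachability checks as in the log
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Teardown is the reverse nvmf_tcp_fini path also visible in the trace: flush nvmf_init_if, delete nvmf_br and the veth pairs, and remove the nvmf_tgt_ns_spdk namespace.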
00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.807 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=73022 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=edd8bd57f62c5b1fa56dd470384651cb958ac30687e25ad6 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1zh 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key edd8bd57f62c5b1fa56dd470384651cb958ac30687e25ad6 0 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 edd8bd57f62c5b1fa56dd470384651cb958ac30687e25ad6 0 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=edd8bd57f62c5b1fa56dd470384651cb958ac30687e25ad6 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:42.183 18:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1zh 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1zh 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.1zh 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:42.183 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:42.184 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5130ab1bc1d4c8e2569b25e40a9b7a80c92ace4964418849ef214f6e6d558f00 00:15:42.184 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:42.184 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xfB 00:15:42.184 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5130ab1bc1d4c8e2569b25e40a9b7a80c92ace4964418849ef214f6e6d558f00 3 00:15:42.184 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5130ab1bc1d4c8e2569b25e40a9b7a80c92ace4964418849ef214f6e6d558f00 3 00:15:42.184 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.184 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.184 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5130ab1bc1d4c8e2569b25e40a9b7a80c92ace4964418849ef214f6e6d558f00 00:15:42.184 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:42.184 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:42.184 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xfB 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xfB 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.xfB 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:42.184 18:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0cea3c3bf5d57bb973162709c9eab110 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Qo5 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0cea3c3bf5d57bb973162709c9eab110 1 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0cea3c3bf5d57bb973162709c9eab110 1 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0cea3c3bf5d57bb973162709c9eab110 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Qo5 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Qo5 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Qo5 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3d5576fcff70340567fe1b7f49e8f206c57104fc11f5bcc2 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.o0I 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3d5576fcff70340567fe1b7f49e8f206c57104fc11f5bcc2 2 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3d5576fcff70340567fe1b7f49e8f206c57104fc11f5bcc2 2 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3d5576fcff70340567fe1b7f49e8f206c57104fc11f5bcc2 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.o0I 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.o0I 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.o0I 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=28620dd0a70f68b501f4619b0ef8670e573b61ae80ead3eb 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oOw 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 28620dd0a70f68b501f4619b0ef8670e573b61ae80ead3eb 2 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 28620dd0a70f68b501f4619b0ef8670e573b61ae80ead3eb 2 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=28620dd0a70f68b501f4619b0ef8670e573b61ae80ead3eb 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:42.184 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oOw 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oOw 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.oOw 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.444 18:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=78b5159b3c687ab1971c23b6699d03ba 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4W9 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 78b5159b3c687ab1971c23b6699d03ba 1 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 78b5159b3c687ab1971c23b6699d03ba 1 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=78b5159b3c687ab1971c23b6699d03ba 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4W9 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4W9 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.4W9 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4171abd6563c8a27ae2b1e2029d0f8e50fdc5fe025992a8ac0b19266dc496d6b 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.IJ9 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
4171abd6563c8a27ae2b1e2029d0f8e50fdc5fe025992a8ac0b19266dc496d6b 3 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4171abd6563c8a27ae2b1e2029d0f8e50fdc5fe025992a8ac0b19266dc496d6b 3 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4171abd6563c8a27ae2b1e2029d0f8e50fdc5fe025992a8ac0b19266dc496d6b 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.IJ9 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.IJ9 00:15:42.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.IJ9 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 72990 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72990 ']' 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.444 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:42.714 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.714 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:42.714 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 73022 /var/tmp/host.sock 00:15:42.714 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 73022 ']' 00:15:42.714 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:42.714 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.714 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
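The gen_dhchap_key calls traced above draw len/2 random bytes as a hex string, wrap them into the DHHC-1:<digest-id>:<base64>: secrets that reappear in the nvme connect command further down, and stash each one in a 0600 temp file. A compressed sketch of that flow is below; gen_key_sketch is an illustrative name, and the little-endian CRC-32 appended to the ASCII key before base64 encoding is an assumption inferred from the key format, not something spelled out in the trace:

  #!/usr/bin/env bash
  gen_key_sketch() {
      local digest=$1 hexlen=$2            # e.g. "null" 48, or "sha512" 64
      declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

      # hexlen hex characters <- hexlen/2 random bytes, printed as one hex string
      local key
      key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)

      # Wrap as DHHC-1:<digest-id>:<base64(ASCII key + CRC-32)>:
      # Assumption: the 4-byte CRC-32 of the ASCII key is appended little-endian.
      local secret
      secret=$(python3 -c '
  import base64, sys, zlib
  key = sys.argv[1].encode()
  crc = zlib.crc32(key).to_bytes(4, "little")
  print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
  ' "$key" "${digests[$digest]}")

      # keep the secret in a private temp file, like /tmp/spdk.key-null.1zh above
      local file
      file=$(mktemp -t "spdk.key-$digest.XXX")
      printf '%s\n' "$secret" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

Called as gen_key_sketch null 48 and gen_key_sketch sha512 64 this mirrors the keys[0]/ckeys[0] pair generated above, whose DHHC-1:00:... and DHHC-1:03:... forms show up again as --dhchap-secret and --dhchap-ctrl-secret in the nvme connect line later in the log.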
00:15:42.714 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.714 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1zh 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1zh 00:15:43.282 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1zh 00:15:43.542 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.xfB ]] 00:15:43.542 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xfB 00:15:43.542 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.542 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.542 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.542 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xfB 00:15:43.542 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xfB 00:15:43.800 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:43.800 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Qo5 00:15:43.800 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.800 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.800 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.800 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Qo5 00:15:43.800 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Qo5 00:15:44.058 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.o0I ]] 00:15:44.058 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.o0I 00:15:44.058 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.058 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.058 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.058 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.o0I 00:15:44.058 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.o0I 00:15:44.316 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:44.316 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oOw 00:15:44.316 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.316 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.316 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.316 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.oOw 00:15:44.316 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.oOw 00:15:44.575 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.4W9 ]] 00:15:44.575 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4W9 00:15:44.575 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.575 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.575 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.575 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4W9 00:15:44.575 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4W9 00:15:44.833 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:44.833 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.IJ9 00:15:44.833 18:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.833 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.833 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.833 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.IJ9 00:15:44.833 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.IJ9 00:15:45.091 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:45.091 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:45.092 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.092 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.092 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.092 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.350 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:45.351 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.351 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:45.351 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:45.351 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:45.351 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.351 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.351 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.351 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.351 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.351 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.351 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:15:45.610 00:15:45.610 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.610 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.610 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.868 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.868 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.868 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.868 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.868 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.868 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.868 { 00:15:45.868 "cntlid": 1, 00:15:45.868 "qid": 0, 00:15:45.868 "state": "enabled", 00:15:45.868 "thread": "nvmf_tgt_poll_group_000", 00:15:45.868 "listen_address": { 00:15:45.868 "trtype": "TCP", 00:15:45.868 "adrfam": "IPv4", 00:15:45.868 "traddr": "10.0.0.2", 00:15:45.868 "trsvcid": "4420" 00:15:45.868 }, 00:15:45.868 "peer_address": { 00:15:45.868 "trtype": "TCP", 00:15:45.868 "adrfam": "IPv4", 00:15:45.868 "traddr": "10.0.0.1", 00:15:45.868 "trsvcid": "45494" 00:15:45.868 }, 00:15:45.868 "auth": { 00:15:45.868 "state": "completed", 00:15:45.868 "digest": "sha256", 00:15:45.868 "dhgroup": "null" 00:15:45.868 } 00:15:45.868 } 00:15:45.868 ]' 00:15:45.868 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.126 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.126 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.126 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:46.126 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.126 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.126 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.126 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.385 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:15:51.746 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.746 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:15:51.747 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:15:51.747 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.747 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.747 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.747 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.747 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.747 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.747 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
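
The trace above is one full authentication round: the host-side SPDK app is constrained to a single digest and DH group, the host NQN is registered on the subsystem with a key pair, an authenticated controller is attached over TCP, the qpair is inspected, and everything is torn down before the next key is tried. A minimal sketch of that round, using the same rpc.py calls the trace shows — the paths, NQNs, and key names are copied from the log, and the target-side calls assume the target app listens on the default RPC socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    host_sock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96

    # Host-side initiator: restrict DH-HMAC-CHAP to one digest and one DH group.
    "$rpc" -s "$host_sock" bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null

    # Target side: allow this host on the subsystem with the key0/ckey0 pair.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach an authenticated controller over TCP.
    "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify, then tear down so the next key/dhgroup combination starts clean.
    "$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name'
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
    "$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the log itself the same sequence is driven through the test's hostrpc and rpc_cmd wrappers rather than by calling rpc.py directly.
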
00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.747 { 00:15:51.747 "cntlid": 3, 00:15:51.747 "qid": 0, 00:15:51.747 "state": "enabled", 00:15:51.747 "thread": "nvmf_tgt_poll_group_000", 00:15:51.747 "listen_address": { 00:15:51.747 "trtype": "TCP", 00:15:51.747 "adrfam": "IPv4", 00:15:51.747 "traddr": "10.0.0.2", 00:15:51.747 "trsvcid": "4420" 00:15:51.747 }, 00:15:51.747 "peer_address": { 00:15:51.747 "trtype": "TCP", 00:15:51.747 "adrfam": "IPv4", 00:15:51.747 "traddr": "10.0.0.1", 00:15:51.747 "trsvcid": "45520" 00:15:51.747 }, 00:15:51.747 "auth": { 00:15:51.747 "state": "completed", 00:15:51.747 "digest": "sha256", 00:15:51.747 "dhgroup": "null" 00:15:51.747 } 00:15:51.747 } 00:15:51.747 ]' 00:15:51.747 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.017 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.017 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.017 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:52.017 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.017 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.017 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.017 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.275 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:15:53.209 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.209 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:15:53.209 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.209 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
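
Each round then checks the negotiated auth parameters on the first qpair before detaching, which is what the repeated jq filters in the trace are doing. A condensed sketch of those checks, with rpc_cmd standing in for the test's target-side rpc.py wrapper and the expected digest/dhgroup taken from this part of the trace:

    # Inspect the first qpair's auth block and assert what was negotiated.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]      # later rounds expect ffdhe2048/ffdhe3072
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
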
00:15:53.209 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.209 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.209 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:53.209 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.209 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.776 00:15:53.776 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.776 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.776 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.034 { 00:15:54.034 "cntlid": 5, 00:15:54.034 "qid": 0, 00:15:54.034 "state": "enabled", 00:15:54.034 "thread": "nvmf_tgt_poll_group_000", 00:15:54.034 "listen_address": { 00:15:54.034 "trtype": "TCP", 00:15:54.034 "adrfam": "IPv4", 00:15:54.034 "traddr": "10.0.0.2", 00:15:54.034 "trsvcid": "4420" 00:15:54.034 }, 00:15:54.034 "peer_address": { 00:15:54.034 "trtype": "TCP", 00:15:54.034 "adrfam": "IPv4", 00:15:54.034 "traddr": "10.0.0.1", 00:15:54.034 "trsvcid": "45554" 00:15:54.034 }, 00:15:54.034 "auth": { 00:15:54.034 "state": "completed", 00:15:54.034 "digest": "sha256", 00:15:54.034 "dhgroup": "null" 00:15:54.034 } 00:15:54.034 } 00:15:54.034 ]' 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.034 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.292 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:15:55.225 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.225 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:15:55.225 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.225 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.225 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.225 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.225 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:55.225 18:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.225 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.790 00:15:55.790 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.790 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.790 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.048 { 00:15:56.048 "cntlid": 7, 00:15:56.048 "qid": 0, 00:15:56.048 "state": "enabled", 00:15:56.048 "thread": "nvmf_tgt_poll_group_000", 00:15:56.048 "listen_address": { 00:15:56.048 "trtype": "TCP", 00:15:56.048 "adrfam": "IPv4", 00:15:56.048 "traddr": 
"10.0.0.2", 00:15:56.048 "trsvcid": "4420" 00:15:56.048 }, 00:15:56.048 "peer_address": { 00:15:56.048 "trtype": "TCP", 00:15:56.048 "adrfam": "IPv4", 00:15:56.048 "traddr": "10.0.0.1", 00:15:56.048 "trsvcid": "57476" 00:15:56.048 }, 00:15:56.048 "auth": { 00:15:56.048 "state": "completed", 00:15:56.048 "digest": "sha256", 00:15:56.048 "dhgroup": "null" 00:15:56.048 } 00:15:56.048 } 00:15:56.048 ]' 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.048 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.306 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:15:57.239 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.239 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:15:57.239 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.239 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.239 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.239 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.239 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.239 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.239 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.496 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:57.496 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.496 18:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:57.496 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:57.496 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:57.496 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.496 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.496 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.496 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.496 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.496 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.496 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.753 00:15:57.753 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.753 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.753 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.011 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.011 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.011 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.011 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.011 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.011 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.011 { 00:15:58.011 "cntlid": 9, 00:15:58.011 "qid": 0, 00:15:58.011 "state": "enabled", 00:15:58.011 "thread": "nvmf_tgt_poll_group_000", 00:15:58.011 "listen_address": { 00:15:58.011 "trtype": "TCP", 00:15:58.011 "adrfam": "IPv4", 00:15:58.011 "traddr": "10.0.0.2", 00:15:58.011 "trsvcid": "4420" 00:15:58.011 }, 00:15:58.011 "peer_address": { 00:15:58.011 "trtype": "TCP", 00:15:58.011 "adrfam": "IPv4", 00:15:58.011 "traddr": "10.0.0.1", 00:15:58.011 "trsvcid": "57506" 00:15:58.011 }, 00:15:58.011 "auth": { 00:15:58.011 "state": "completed", 00:15:58.011 "digest": "sha256", 00:15:58.011 "dhgroup": "ffdhe2048" 00:15:58.011 } 00:15:58.011 } 
00:15:58.011 ]' 00:15:58.011 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.011 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.011 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.011 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.011 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.268 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.268 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.268 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.525 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:15:59.089 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.089 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:15:59.089 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.089 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.089 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.089 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.089 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:59.089 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.652 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.910 00:15:59.910 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.910 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.910 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.167 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.167 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.167 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.167 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.168 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.168 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.168 { 00:16:00.168 "cntlid": 11, 00:16:00.168 "qid": 0, 00:16:00.168 "state": "enabled", 00:16:00.168 "thread": "nvmf_tgt_poll_group_000", 00:16:00.168 "listen_address": { 00:16:00.168 "trtype": "TCP", 00:16:00.168 "adrfam": "IPv4", 00:16:00.168 "traddr": "10.0.0.2", 00:16:00.168 "trsvcid": "4420" 00:16:00.168 }, 00:16:00.168 "peer_address": { 00:16:00.168 "trtype": "TCP", 00:16:00.168 "adrfam": "IPv4", 00:16:00.168 "traddr": "10.0.0.1", 00:16:00.168 "trsvcid": "57528" 00:16:00.168 }, 00:16:00.168 "auth": { 00:16:00.168 "state": "completed", 00:16:00.168 "digest": "sha256", 00:16:00.168 "dhgroup": "ffdhe2048" 00:16:00.168 } 00:16:00.168 } 00:16:00.168 ]' 00:16:00.168 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.168 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.168 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.168 18:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.168 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.168 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.168 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.168 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.425 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:16:00.990 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.247 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:01.247 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.247 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.247 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.247 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.247 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:01.247 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
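
Besides the SPDK host-side initiator, each key is also exercised through the kernel NVMe/TCP initiator via nvme-cli, passing the DH-HMAC-CHAP secrets directly on the command line. A sketch of that path, with placeholder secret variables standing in for the DHHC-1 strings printed in the trace:

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96
    hostid=1e224894-a0fc-4112-b81b-a37606f50c96

    # HOST_SECRET / CTRL_SECRET are placeholders for DHHC-1:xx:... strings such as
    # the ones shown in the trace (derived from the keyring files set up earlier).
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

    # Tear the session down again; the trace shows "disconnected 1 controller(s)".
    nvme disconnect -n "$subnqn"
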
00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.248 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.811 00:16:01.811 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.811 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.811 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.811 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.811 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.811 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.811 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.069 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.069 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.069 { 00:16:02.069 "cntlid": 13, 00:16:02.069 "qid": 0, 00:16:02.069 "state": "enabled", 00:16:02.069 "thread": "nvmf_tgt_poll_group_000", 00:16:02.069 "listen_address": { 00:16:02.069 "trtype": "TCP", 00:16:02.069 "adrfam": "IPv4", 00:16:02.069 "traddr": "10.0.0.2", 00:16:02.069 "trsvcid": "4420" 00:16:02.069 }, 00:16:02.069 "peer_address": { 00:16:02.069 "trtype": "TCP", 00:16:02.069 "adrfam": "IPv4", 00:16:02.069 "traddr": "10.0.0.1", 00:16:02.069 "trsvcid": "57554" 00:16:02.069 }, 00:16:02.069 "auth": { 00:16:02.069 "state": "completed", 00:16:02.069 "digest": "sha256", 00:16:02.069 "dhgroup": "ffdhe2048" 00:16:02.069 } 00:16:02.069 } 00:16:02.069 ]' 00:16:02.069 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.069 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.069 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.069 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:02.069 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.069 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.069 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.069 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.326 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:16:02.892 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.892 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:02.892 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.892 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.892 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.892 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.892 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:02.892 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.150 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.717 00:16:03.717 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.717 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.717 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.717 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.717 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.717 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.717 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.975 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.975 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.975 { 00:16:03.975 "cntlid": 15, 00:16:03.975 "qid": 0, 00:16:03.975 "state": "enabled", 00:16:03.975 "thread": "nvmf_tgt_poll_group_000", 00:16:03.975 "listen_address": { 00:16:03.975 "trtype": "TCP", 00:16:03.975 "adrfam": "IPv4", 00:16:03.975 "traddr": "10.0.0.2", 00:16:03.975 "trsvcid": "4420" 00:16:03.975 }, 00:16:03.975 "peer_address": { 00:16:03.975 "trtype": "TCP", 00:16:03.975 "adrfam": "IPv4", 00:16:03.975 "traddr": "10.0.0.1", 00:16:03.975 "trsvcid": "57584" 00:16:03.975 }, 00:16:03.975 "auth": { 00:16:03.975 "state": "completed", 00:16:03.975 "digest": "sha256", 00:16:03.975 "dhgroup": "ffdhe2048" 00:16:03.975 } 00:16:03.975 } 00:16:03.975 ]' 00:16:03.975 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.975 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.975 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.975 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.975 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.975 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.975 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.975 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.233 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:16:05.167 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.167 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:05.167 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.167 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.167 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.167 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.167 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.167 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.167 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.425 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.683 00:16:05.683 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.683 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.683 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.941 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.941 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.941 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.941 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.941 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.941 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.941 { 00:16:05.941 "cntlid": 17, 00:16:05.941 "qid": 0, 00:16:05.941 "state": "enabled", 00:16:05.941 "thread": "nvmf_tgt_poll_group_000", 00:16:05.941 "listen_address": { 00:16:05.941 "trtype": "TCP", 00:16:05.941 "adrfam": "IPv4", 00:16:05.941 "traddr": "10.0.0.2", 00:16:05.941 "trsvcid": "4420" 00:16:05.941 }, 00:16:05.941 "peer_address": { 00:16:05.941 "trtype": "TCP", 00:16:05.941 "adrfam": "IPv4", 00:16:05.941 "traddr": "10.0.0.1", 00:16:05.941 "trsvcid": "54162" 00:16:05.941 }, 00:16:05.941 "auth": { 00:16:05.941 "state": "completed", 00:16:05.941 "digest": "sha256", 00:16:05.941 "dhgroup": "ffdhe3072" 00:16:05.941 } 00:16:05.941 } 00:16:05.941 ]' 00:16:05.941 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.200 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.200 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.200 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.200 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.200 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.200 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.200 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.458 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:16:07.392 18:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.392 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:07.392 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.392 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.392 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.392 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.392 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:07.392 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.650 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.908 00:16:07.908 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.908 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
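
The target/auth.sh@91-94 markers in the trace correspond to a nested sweep: for every digest, every DH group, and every key index, the host options are reset and one connect_authenticate round is run. A rough reconstruction of that loop shape — the array contents below are illustrative and only reflect what appears in this part of the log (sha256 with null, ffdhe2048 and ffdhe3072), and hostrpc/connect_authenticate are the test's own helpers:

    digests=(sha256)                       # only sha256 shows up in this excerpt
    dhgroups=(null ffdhe2048 ffdhe3072)    # groups seen so far in the trace
    keys=(key0.file key1.file key2.file key3.file)   # placeholders for the /tmp/spdk.key-* files

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Constrain the host-side initiator, then run one authenticated
                # attach/verify/detach round for this (digest, dhgroup, keyid).
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
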
00:16:07.908 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.165 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.165 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.165 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.165 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.165 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.165 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.165 { 00:16:08.165 "cntlid": 19, 00:16:08.165 "qid": 0, 00:16:08.165 "state": "enabled", 00:16:08.165 "thread": "nvmf_tgt_poll_group_000", 00:16:08.165 "listen_address": { 00:16:08.165 "trtype": "TCP", 00:16:08.165 "adrfam": "IPv4", 00:16:08.165 "traddr": "10.0.0.2", 00:16:08.165 "trsvcid": "4420" 00:16:08.165 }, 00:16:08.165 "peer_address": { 00:16:08.165 "trtype": "TCP", 00:16:08.165 "adrfam": "IPv4", 00:16:08.165 "traddr": "10.0.0.1", 00:16:08.165 "trsvcid": "54184" 00:16:08.165 }, 00:16:08.165 "auth": { 00:16:08.165 "state": "completed", 00:16:08.165 "digest": "sha256", 00:16:08.165 "dhgroup": "ffdhe3072" 00:16:08.165 } 00:16:08.165 } 00:16:08.165 ]' 00:16:08.165 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.165 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.165 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.438 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:08.438 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.438 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.438 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.438 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.736 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:16:09.303 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.303 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:09.303 18:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.303 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.303 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.303 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.303 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:09.303 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.561 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.819 00:16:09.819 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.819 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.819 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.077 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.077 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:10.077 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.077 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.336 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.336 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.336 { 00:16:10.336 "cntlid": 21, 00:16:10.336 "qid": 0, 00:16:10.336 "state": "enabled", 00:16:10.336 "thread": "nvmf_tgt_poll_group_000", 00:16:10.336 "listen_address": { 00:16:10.336 "trtype": "TCP", 00:16:10.336 "adrfam": "IPv4", 00:16:10.336 "traddr": "10.0.0.2", 00:16:10.336 "trsvcid": "4420" 00:16:10.336 }, 00:16:10.336 "peer_address": { 00:16:10.336 "trtype": "TCP", 00:16:10.336 "adrfam": "IPv4", 00:16:10.336 "traddr": "10.0.0.1", 00:16:10.336 "trsvcid": "54226" 00:16:10.336 }, 00:16:10.336 "auth": { 00:16:10.336 "state": "completed", 00:16:10.336 "digest": "sha256", 00:16:10.336 "dhgroup": "ffdhe3072" 00:16:10.336 } 00:16:10.336 } 00:16:10.336 ]' 00:16:10.336 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.336 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.336 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.336 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:10.336 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.336 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.336 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.336 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.594 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.528 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.119 00:16:12.119 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.119 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.119 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.119 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.119 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.119 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.119 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.119 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.119 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.119 { 00:16:12.119 "cntlid": 
23, 00:16:12.119 "qid": 0, 00:16:12.119 "state": "enabled", 00:16:12.119 "thread": "nvmf_tgt_poll_group_000", 00:16:12.119 "listen_address": { 00:16:12.119 "trtype": "TCP", 00:16:12.119 "adrfam": "IPv4", 00:16:12.119 "traddr": "10.0.0.2", 00:16:12.119 "trsvcid": "4420" 00:16:12.119 }, 00:16:12.119 "peer_address": { 00:16:12.119 "trtype": "TCP", 00:16:12.119 "adrfam": "IPv4", 00:16:12.119 "traddr": "10.0.0.1", 00:16:12.119 "trsvcid": "54270" 00:16:12.119 }, 00:16:12.119 "auth": { 00:16:12.119 "state": "completed", 00:16:12.119 "digest": "sha256", 00:16:12.119 "dhgroup": "ffdhe3072" 00:16:12.119 } 00:16:12.119 } 00:16:12.119 ]' 00:16:12.119 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.377 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.377 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.377 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:12.377 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.377 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.377 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.377 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.635 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:16:13.568 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.568 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:13.568 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.568 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.568 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.568 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.568 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.568 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.568 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.827 18:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:13.827 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.827 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:13.827 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:13.827 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:13.827 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.827 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.827 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.827 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.827 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.827 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.827 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.085 00:16:14.085 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.085 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.085 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.343 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.343 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.343 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.343 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.343 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.343 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.343 { 00:16:14.343 "cntlid": 25, 00:16:14.343 "qid": 0, 00:16:14.343 "state": "enabled", 00:16:14.343 "thread": "nvmf_tgt_poll_group_000", 00:16:14.343 "listen_address": { 00:16:14.343 "trtype": "TCP", 00:16:14.343 "adrfam": "IPv4", 00:16:14.343 "traddr": "10.0.0.2", 00:16:14.343 "trsvcid": "4420" 00:16:14.343 }, 00:16:14.343 "peer_address": { 00:16:14.343 "trtype": "TCP", 00:16:14.343 
"adrfam": "IPv4", 00:16:14.343 "traddr": "10.0.0.1", 00:16:14.343 "trsvcid": "50530" 00:16:14.343 }, 00:16:14.343 "auth": { 00:16:14.343 "state": "completed", 00:16:14.343 "digest": "sha256", 00:16:14.343 "dhgroup": "ffdhe4096" 00:16:14.343 } 00:16:14.343 } 00:16:14.343 ]' 00:16:14.343 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.602 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.602 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.602 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.602 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.602 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.602 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.602 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.861 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:16:15.446 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.446 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:15.446 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.446 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.446 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.446 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.446 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:15.446 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:16.013 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:16.013 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.013 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.013 18:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:16.013 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:16.013 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.013 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.013 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.013 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.013 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.013 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.013 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.272 00:16:16.272 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.272 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.272 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.530 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.530 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.530 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.530 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.530 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.530 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.530 { 00:16:16.530 "cntlid": 27, 00:16:16.530 "qid": 0, 00:16:16.530 "state": "enabled", 00:16:16.530 "thread": "nvmf_tgt_poll_group_000", 00:16:16.530 "listen_address": { 00:16:16.530 "trtype": "TCP", 00:16:16.530 "adrfam": "IPv4", 00:16:16.530 "traddr": "10.0.0.2", 00:16:16.530 "trsvcid": "4420" 00:16:16.530 }, 00:16:16.530 "peer_address": { 00:16:16.530 "trtype": "TCP", 00:16:16.530 "adrfam": "IPv4", 00:16:16.530 "traddr": "10.0.0.1", 00:16:16.530 "trsvcid": "50564" 00:16:16.530 }, 00:16:16.530 "auth": { 00:16:16.530 "state": "completed", 00:16:16.530 "digest": "sha256", 00:16:16.530 "dhgroup": "ffdhe4096" 00:16:16.530 } 00:16:16.530 } 00:16:16.530 ]' 00:16:16.530 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:16:16.530 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.530 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.530 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:16.530 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.788 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.788 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.788 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.047 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:16:17.612 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.612 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:17.612 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.612 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.612 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.612 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.612 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:17.612 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.869 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.435 00:16:18.435 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.435 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.435 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.693 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.693 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.693 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.693 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.693 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.693 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.693 { 00:16:18.693 "cntlid": 29, 00:16:18.693 "qid": 0, 00:16:18.694 "state": "enabled", 00:16:18.694 "thread": "nvmf_tgt_poll_group_000", 00:16:18.694 "listen_address": { 00:16:18.694 "trtype": "TCP", 00:16:18.694 "adrfam": "IPv4", 00:16:18.694 "traddr": "10.0.0.2", 00:16:18.694 "trsvcid": "4420" 00:16:18.694 }, 00:16:18.694 "peer_address": { 00:16:18.694 "trtype": "TCP", 00:16:18.694 "adrfam": "IPv4", 00:16:18.694 "traddr": "10.0.0.1", 00:16:18.694 "trsvcid": "50584" 00:16:18.694 }, 00:16:18.694 "auth": { 00:16:18.694 "state": "completed", 00:16:18.694 "digest": "sha256", 00:16:18.694 "dhgroup": "ffdhe4096" 00:16:18.694 } 00:16:18.694 } 00:16:18.694 ]' 00:16:18.694 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.694 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.694 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.694 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.694 18:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.694 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.694 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.694 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.952 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.886 18:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.886 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.454 00:16:20.454 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.454 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.454 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.731 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.731 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.731 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.731 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.731 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.731 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.731 { 00:16:20.731 "cntlid": 31, 00:16:20.731 "qid": 0, 00:16:20.731 "state": "enabled", 00:16:20.731 "thread": "nvmf_tgt_poll_group_000", 00:16:20.731 "listen_address": { 00:16:20.731 "trtype": "TCP", 00:16:20.731 "adrfam": "IPv4", 00:16:20.731 "traddr": "10.0.0.2", 00:16:20.731 "trsvcid": "4420" 00:16:20.731 }, 00:16:20.731 "peer_address": { 00:16:20.731 "trtype": "TCP", 00:16:20.731 "adrfam": "IPv4", 00:16:20.731 "traddr": "10.0.0.1", 00:16:20.731 "trsvcid": "50614" 00:16:20.731 }, 00:16:20.731 "auth": { 00:16:20.731 "state": "completed", 00:16:20.731 "digest": "sha256", 00:16:20.731 "dhgroup": "ffdhe4096" 00:16:20.732 } 00:16:20.732 } 00:16:20.732 ]' 00:16:20.732 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.732 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.732 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.732 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:20.732 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.732 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.732 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.732 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.990 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:16:21.925 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.925 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:21.925 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.925 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.925 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.925 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.925 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.925 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:21.925 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:22.183 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.441 00:16:22.441 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.441 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.441 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.007 { 00:16:23.007 "cntlid": 33, 00:16:23.007 "qid": 0, 00:16:23.007 "state": "enabled", 00:16:23.007 "thread": "nvmf_tgt_poll_group_000", 00:16:23.007 "listen_address": { 00:16:23.007 "trtype": "TCP", 00:16:23.007 "adrfam": "IPv4", 00:16:23.007 "traddr": "10.0.0.2", 00:16:23.007 "trsvcid": "4420" 00:16:23.007 }, 00:16:23.007 "peer_address": { 00:16:23.007 "trtype": "TCP", 00:16:23.007 "adrfam": "IPv4", 00:16:23.007 "traddr": "10.0.0.1", 00:16:23.007 "trsvcid": "50640" 00:16:23.007 }, 00:16:23.007 "auth": { 00:16:23.007 "state": "completed", 00:16:23.007 "digest": "sha256", 00:16:23.007 "dhgroup": "ffdhe6144" 00:16:23.007 } 00:16:23.007 } 00:16:23.007 ]' 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.007 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.265 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 
1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:16:24.199 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.199 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:24.199 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.199 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.199 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.199 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.199 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:24.199 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.457 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.022 00:16:25.022 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.022 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.022 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.280 { 00:16:25.280 "cntlid": 35, 00:16:25.280 "qid": 0, 00:16:25.280 "state": "enabled", 00:16:25.280 "thread": "nvmf_tgt_poll_group_000", 00:16:25.280 "listen_address": { 00:16:25.280 "trtype": "TCP", 00:16:25.280 "adrfam": "IPv4", 00:16:25.280 "traddr": "10.0.0.2", 00:16:25.280 "trsvcid": "4420" 00:16:25.280 }, 00:16:25.280 "peer_address": { 00:16:25.280 "trtype": "TCP", 00:16:25.280 "adrfam": "IPv4", 00:16:25.280 "traddr": "10.0.0.1", 00:16:25.280 "trsvcid": "37056" 00:16:25.280 }, 00:16:25.280 "auth": { 00:16:25.280 "state": "completed", 00:16:25.280 "digest": "sha256", 00:16:25.280 "dhgroup": "ffdhe6144" 00:16:25.280 } 00:16:25.280 } 00:16:25.280 ]' 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.280 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.846 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:16:26.413 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.413 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.413 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:26.413 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.413 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.413 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.413 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.413 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:26.413 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.672 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.238 00:16:27.238 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.238 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.238 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.496 { 00:16:27.496 "cntlid": 37, 00:16:27.496 "qid": 0, 00:16:27.496 "state": "enabled", 00:16:27.496 "thread": "nvmf_tgt_poll_group_000", 00:16:27.496 "listen_address": { 00:16:27.496 "trtype": "TCP", 00:16:27.496 "adrfam": "IPv4", 00:16:27.496 "traddr": "10.0.0.2", 00:16:27.496 "trsvcid": "4420" 00:16:27.496 }, 00:16:27.496 "peer_address": { 00:16:27.496 "trtype": "TCP", 00:16:27.496 "adrfam": "IPv4", 00:16:27.496 "traddr": "10.0.0.1", 00:16:27.496 "trsvcid": "37080" 00:16:27.496 }, 00:16:27.496 "auth": { 00:16:27.496 "state": "completed", 00:16:27.496 "digest": "sha256", 00:16:27.496 "dhgroup": "ffdhe6144" 00:16:27.496 } 00:16:27.496 } 00:16:27.496 ]' 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.496 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.061 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:16:28.624 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.624 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:28.624 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
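By this point the sha256/ffdhe6144 pass is underway, and the trace has just shown the other half of each round: after the host-RPC verification the test reconnects with the kernel initiator using nvme-cli and the generated DHHC-1 secrets, immediately disconnects ("disconnected 1 controller(s)"), and removes the host entry so the next key can be installed. A minimal sketch of that leg follows, restricted to the nvme-cli and RPC arguments that appear in the trace; the secrets are placeholders for the run's generated keys, and rpc_cmd is assumed to resolve to the same scripts/rpc.py against the target's default socket.

#!/usr/bin/env bash
# Kernel-initiator leg between two target reconfigurations (sketch).
# The DHHC-1 strings are placeholders for the secrets generated by this run.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96          # from the trace
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

# Connect through the kernel host stack with the same key pair the target holds.
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "DHHC-1:02:<key2 secret from the run>" \
    --dhchap-ctrl-secret "DHHC-1:01:<ckey2 secret from the run>"

# The test only checks that the fabric comes up; it disconnects right away
# and drops the host entry before installing the next key.
nvme disconnect -n "$SUBNQN"
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"   # target-side, default socket assumed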
00:16:28.624 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.624 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.624 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.624 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:28.624 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.881 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:29.446 00:16:29.446 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.446 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.446 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.703 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.704 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.704 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.704 18:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.704 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.704 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.704 { 00:16:29.704 "cntlid": 39, 00:16:29.704 "qid": 0, 00:16:29.704 "state": "enabled", 00:16:29.704 "thread": "nvmf_tgt_poll_group_000", 00:16:29.704 "listen_address": { 00:16:29.704 "trtype": "TCP", 00:16:29.704 "adrfam": "IPv4", 00:16:29.704 "traddr": "10.0.0.2", 00:16:29.704 "trsvcid": "4420" 00:16:29.704 }, 00:16:29.704 "peer_address": { 00:16:29.704 "trtype": "TCP", 00:16:29.704 "adrfam": "IPv4", 00:16:29.704 "traddr": "10.0.0.1", 00:16:29.704 "trsvcid": "37104" 00:16:29.704 }, 00:16:29.704 "auth": { 00:16:29.704 "state": "completed", 00:16:29.704 "digest": "sha256", 00:16:29.704 "dhgroup": "ffdhe6144" 00:16:29.704 } 00:16:29.704 } 00:16:29.704 ]' 00:16:29.704 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.704 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.961 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.961 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:29.961 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.961 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.961 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.961 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.218 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:16:31.149 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.150 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:31.150 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.150 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.150 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.150 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.150 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.150 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.150 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.407 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.973 00:16:31.973 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.973 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.973 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.232 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.232 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.232 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.232 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.232 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.232 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.232 { 00:16:32.232 "cntlid": 41, 00:16:32.232 "qid": 0, 
00:16:32.232 "state": "enabled", 00:16:32.232 "thread": "nvmf_tgt_poll_group_000", 00:16:32.232 "listen_address": { 00:16:32.232 "trtype": "TCP", 00:16:32.232 "adrfam": "IPv4", 00:16:32.232 "traddr": "10.0.0.2", 00:16:32.232 "trsvcid": "4420" 00:16:32.232 }, 00:16:32.232 "peer_address": { 00:16:32.232 "trtype": "TCP", 00:16:32.232 "adrfam": "IPv4", 00:16:32.232 "traddr": "10.0.0.1", 00:16:32.232 "trsvcid": "37126" 00:16:32.232 }, 00:16:32.232 "auth": { 00:16:32.232 "state": "completed", 00:16:32.232 "digest": "sha256", 00:16:32.232 "dhgroup": "ffdhe8192" 00:16:32.232 } 00:16:32.232 } 00:16:32.232 ]' 00:16:32.232 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.232 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.232 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.489 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.489 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.489 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.489 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.489 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.747 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:16:33.319 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.319 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:33.319 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.319 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.319 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.319 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.319 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:33.319 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.584 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.517 00:16:34.517 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.517 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.517 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.517 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.517 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.517 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.517 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.517 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.517 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.517 { 00:16:34.517 "cntlid": 43, 00:16:34.517 "qid": 0, 00:16:34.517 "state": "enabled", 00:16:34.517 "thread": "nvmf_tgt_poll_group_000", 00:16:34.517 "listen_address": { 00:16:34.517 "trtype": "TCP", 00:16:34.517 "adrfam": "IPv4", 00:16:34.517 "traddr": "10.0.0.2", 00:16:34.517 "trsvcid": "4420" 00:16:34.517 }, 00:16:34.517 "peer_address": { 00:16:34.517 "trtype": "TCP", 00:16:34.517 "adrfam": "IPv4", 00:16:34.517 "traddr": "10.0.0.1", 
00:16:34.517 "trsvcid": "51708" 00:16:34.517 }, 00:16:34.517 "auth": { 00:16:34.517 "state": "completed", 00:16:34.517 "digest": "sha256", 00:16:34.517 "dhgroup": "ffdhe8192" 00:16:34.517 } 00:16:34.517 } 00:16:34.517 ]' 00:16:34.517 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.775 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.775 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.775 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:34.775 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.775 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.775 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.775 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.032 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:35.966 18:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.966 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.582 00:16:36.582 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.582 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.582 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.841 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.841 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.841 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.841 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.841 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.841 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.841 { 00:16:36.841 "cntlid": 45, 00:16:36.841 "qid": 0, 00:16:36.841 "state": "enabled", 00:16:36.841 "thread": "nvmf_tgt_poll_group_000", 00:16:36.841 "listen_address": { 00:16:36.841 "trtype": "TCP", 00:16:36.841 "adrfam": "IPv4", 00:16:36.841 "traddr": "10.0.0.2", 00:16:36.841 "trsvcid": "4420" 00:16:36.841 }, 00:16:36.841 "peer_address": { 00:16:36.841 "trtype": "TCP", 00:16:36.841 "adrfam": "IPv4", 00:16:36.841 "traddr": "10.0.0.1", 00:16:36.841 "trsvcid": "51746" 00:16:36.841 }, 00:16:36.841 "auth": { 00:16:36.841 "state": "completed", 00:16:36.841 "digest": "sha256", 00:16:36.841 "dhgroup": "ffdhe8192" 00:16:36.841 } 00:16:36.841 } 00:16:36.841 ]' 00:16:36.841 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.841 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.841 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.099 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.099 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.099 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.099 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.099 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.357 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:16:37.922 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.922 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:37.922 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.922 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.922 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.922 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.922 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:37.922 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 
--dhchap-key key3 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.488 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:39.054 00:16:39.054 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.054 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.054 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.325 { 00:16:39.325 "cntlid": 47, 00:16:39.325 "qid": 0, 00:16:39.325 "state": "enabled", 00:16:39.325 "thread": "nvmf_tgt_poll_group_000", 00:16:39.325 "listen_address": { 00:16:39.325 "trtype": "TCP", 00:16:39.325 "adrfam": "IPv4", 00:16:39.325 "traddr": "10.0.0.2", 00:16:39.325 "trsvcid": "4420" 00:16:39.325 }, 00:16:39.325 "peer_address": { 00:16:39.325 "trtype": "TCP", 00:16:39.325 "adrfam": "IPv4", 00:16:39.325 "traddr": "10.0.0.1", 00:16:39.325 "trsvcid": "51778" 00:16:39.325 }, 00:16:39.325 "auth": { 00:16:39.325 "state": "completed", 00:16:39.325 "digest": "sha256", 00:16:39.325 "dhgroup": "ffdhe8192" 00:16:39.325 } 00:16:39.325 } 00:16:39.325 ]' 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.325 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.891 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:16:40.457 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.457 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:40.457 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.457 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.457 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.457 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:40.457 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.457 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.457 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.457 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.715 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.972 00:16:40.972 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.972 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.972 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.231 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.231 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.231 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.231 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.231 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.231 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.231 { 00:16:41.231 "cntlid": 49, 00:16:41.231 "qid": 0, 00:16:41.231 "state": "enabled", 00:16:41.231 "thread": "nvmf_tgt_poll_group_000", 00:16:41.231 "listen_address": { 00:16:41.231 "trtype": "TCP", 00:16:41.231 "adrfam": "IPv4", 00:16:41.231 "traddr": "10.0.0.2", 00:16:41.231 "trsvcid": "4420" 00:16:41.231 }, 00:16:41.231 "peer_address": { 00:16:41.231 "trtype": "TCP", 00:16:41.231 "adrfam": "IPv4", 00:16:41.231 "traddr": "10.0.0.1", 00:16:41.231 "trsvcid": "51794" 00:16:41.231 }, 00:16:41.231 "auth": { 00:16:41.231 "state": "completed", 00:16:41.231 "digest": "sha384", 00:16:41.231 "dhgroup": "null" 00:16:41.231 } 00:16:41.231 } 00:16:41.231 ]' 00:16:41.231 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.516 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.516 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.516 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:41.516 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.516 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.516 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.516 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.774 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:16:42.339 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.339 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:42.339 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.339 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.339 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.339 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.339 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:42.339 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.597 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.856 00:16:42.856 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.856 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.856 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.114 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.114 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.114 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.114 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.114 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.114 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.114 { 00:16:43.114 "cntlid": 51, 00:16:43.114 "qid": 0, 00:16:43.114 "state": "enabled", 00:16:43.114 "thread": "nvmf_tgt_poll_group_000", 00:16:43.114 "listen_address": { 00:16:43.114 "trtype": "TCP", 00:16:43.114 "adrfam": "IPv4", 00:16:43.114 "traddr": "10.0.0.2", 00:16:43.114 "trsvcid": "4420" 00:16:43.114 }, 00:16:43.114 "peer_address": { 00:16:43.114 "trtype": "TCP", 00:16:43.114 "adrfam": "IPv4", 00:16:43.114 "traddr": "10.0.0.1", 00:16:43.114 "trsvcid": "51822" 00:16:43.114 }, 00:16:43.114 "auth": { 00:16:43.114 "state": "completed", 00:16:43.114 "digest": "sha384", 00:16:43.114 "dhgroup": "null" 00:16:43.114 } 00:16:43.114 } 00:16:43.114 ]' 00:16:43.114 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.372 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.372 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.372 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:43.372 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.372 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.372 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.372 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.630 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret 
DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:16:44.231 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.231 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:44.231 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.231 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.231 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.231 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.231 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:44.231 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:44.489 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:44.489 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.490 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:44.490 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:44.490 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:44.490 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.490 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.490 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.490 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.490 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.490 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.490 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.057 00:16:45.057 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.057 18:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.057 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.315 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.315 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.315 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.316 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.316 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.316 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.316 { 00:16:45.316 "cntlid": 53, 00:16:45.316 "qid": 0, 00:16:45.316 "state": "enabled", 00:16:45.316 "thread": "nvmf_tgt_poll_group_000", 00:16:45.316 "listen_address": { 00:16:45.316 "trtype": "TCP", 00:16:45.316 "adrfam": "IPv4", 00:16:45.316 "traddr": "10.0.0.2", 00:16:45.316 "trsvcid": "4420" 00:16:45.316 }, 00:16:45.316 "peer_address": { 00:16:45.316 "trtype": "TCP", 00:16:45.316 "adrfam": "IPv4", 00:16:45.316 "traddr": "10.0.0.1", 00:16:45.316 "trsvcid": "60922" 00:16:45.316 }, 00:16:45.316 "auth": { 00:16:45.316 "state": "completed", 00:16:45.316 "digest": "sha384", 00:16:45.316 "dhgroup": "null" 00:16:45.316 } 00:16:45.316 } 00:16:45.316 ]' 00:16:45.316 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.316 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.316 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.316 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:45.316 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.316 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.316 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.316 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.574 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:16:46.509 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.509 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:46.509 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.509 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.509 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.509 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.509 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.509 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.767 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.025 00:16:47.025 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.025 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.025 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.285 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.285 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:47.285 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.285 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.285 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.285 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.285 { 00:16:47.285 "cntlid": 55, 00:16:47.285 "qid": 0, 00:16:47.285 "state": "enabled", 00:16:47.285 "thread": "nvmf_tgt_poll_group_000", 00:16:47.285 "listen_address": { 00:16:47.285 "trtype": "TCP", 00:16:47.285 "adrfam": "IPv4", 00:16:47.285 "traddr": "10.0.0.2", 00:16:47.285 "trsvcid": "4420" 00:16:47.285 }, 00:16:47.285 "peer_address": { 00:16:47.285 "trtype": "TCP", 00:16:47.285 "adrfam": "IPv4", 00:16:47.285 "traddr": "10.0.0.1", 00:16:47.285 "trsvcid": "60952" 00:16:47.285 }, 00:16:47.285 "auth": { 00:16:47.285 "state": "completed", 00:16:47.285 "digest": "sha384", 00:16:47.285 "dhgroup": "null" 00:16:47.285 } 00:16:47.285 } 00:16:47.285 ]' 00:16:47.285 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.285 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.544 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.544 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.544 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.544 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.544 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.544 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.803 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:16:48.370 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.370 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:48.370 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.370 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.629 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.206 00:16:49.206 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.206 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.206 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.465 18:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.465 { 00:16:49.465 "cntlid": 57, 00:16:49.465 "qid": 0, 00:16:49.465 "state": "enabled", 00:16:49.465 "thread": "nvmf_tgt_poll_group_000", 00:16:49.465 "listen_address": { 00:16:49.465 "trtype": "TCP", 00:16:49.465 "adrfam": "IPv4", 00:16:49.465 "traddr": "10.0.0.2", 00:16:49.465 "trsvcid": "4420" 00:16:49.465 }, 00:16:49.465 "peer_address": { 00:16:49.465 "trtype": "TCP", 00:16:49.465 "adrfam": "IPv4", 00:16:49.465 "traddr": "10.0.0.1", 00:16:49.465 "trsvcid": "60980" 00:16:49.465 }, 00:16:49.465 "auth": { 00:16:49.465 "state": "completed", 00:16:49.465 "digest": "sha384", 00:16:49.465 "dhgroup": "ffdhe2048" 00:16:49.465 } 00:16:49.465 } 00:16:49.465 ]' 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.465 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.723 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.657 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.223 00:16:51.223 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.223 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.223 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.481 { 00:16:51.481 "cntlid": 59, 00:16:51.481 "qid": 0, 00:16:51.481 "state": "enabled", 00:16:51.481 "thread": "nvmf_tgt_poll_group_000", 00:16:51.481 "listen_address": { 00:16:51.481 "trtype": "TCP", 00:16:51.481 "adrfam": "IPv4", 00:16:51.481 "traddr": "10.0.0.2", 00:16:51.481 "trsvcid": "4420" 
00:16:51.481 }, 00:16:51.481 "peer_address": { 00:16:51.481 "trtype": "TCP", 00:16:51.481 "adrfam": "IPv4", 00:16:51.481 "traddr": "10.0.0.1", 00:16:51.481 "trsvcid": "32776" 00:16:51.481 }, 00:16:51.481 "auth": { 00:16:51.481 "state": "completed", 00:16:51.481 "digest": "sha384", 00:16:51.481 "dhgroup": "ffdhe2048" 00:16:51.481 } 00:16:51.481 } 00:16:51.481 ]' 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.481 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.751 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:16:52.684 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.684 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:52.684 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.684 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.684 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.684 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.684 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.684 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
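Note on the trace above: for each DH group the script first narrows the host bdev layer to a single digest/dhgroup pair, then calls its own connect_authenticate helper with the key index under test. Condensed into plain shell (a sketch only; the rpc.py path and the /var/tmp/host.sock host socket are copied from the log, the key index is just the one used in this iteration):

    # Restrict the initiator to one digest/dhgroup combination before the attempt
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    # Run the script's authentication helper with the key under test (key0..key3 in this run)
    connect_authenticate sha384 ffdhe2048 2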
00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.942 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.199 00:16:53.199 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.199 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.199 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.457 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.457 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.457 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.457 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.457 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.457 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.457 { 00:16:53.457 "cntlid": 61, 00:16:53.457 "qid": 0, 00:16:53.457 "state": "enabled", 00:16:53.457 "thread": "nvmf_tgt_poll_group_000", 00:16:53.457 "listen_address": { 00:16:53.457 "trtype": "TCP", 00:16:53.457 "adrfam": "IPv4", 00:16:53.457 "traddr": "10.0.0.2", 00:16:53.457 "trsvcid": "4420" 00:16:53.457 }, 00:16:53.457 "peer_address": { 00:16:53.457 "trtype": "TCP", 00:16:53.457 "adrfam": "IPv4", 00:16:53.457 "traddr": "10.0.0.1", 00:16:53.457 "trsvcid": "32814" 00:16:53.457 }, 00:16:53.457 "auth": { 00:16:53.457 "state": "completed", 00:16:53.457 "digest": "sha384", 00:16:53.457 "dhgroup": "ffdhe2048" 00:16:53.457 } 00:16:53.457 } 00:16:53.457 ]' 00:16:53.457 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.457 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.457 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.715 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.715 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.715 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.715 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.715 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.972 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.905 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.471 00:16:55.471 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.471 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.471 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.729 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.730 { 00:16:55.730 "cntlid": 63, 00:16:55.730 "qid": 0, 00:16:55.730 "state": "enabled", 00:16:55.730 "thread": "nvmf_tgt_poll_group_000", 00:16:55.730 "listen_address": { 00:16:55.730 "trtype": "TCP", 00:16:55.730 "adrfam": "IPv4", 00:16:55.730 "traddr": "10.0.0.2", 00:16:55.730 "trsvcid": "4420" 00:16:55.730 }, 00:16:55.730 "peer_address": { 00:16:55.730 "trtype": "TCP", 00:16:55.730 "adrfam": "IPv4", 00:16:55.730 "traddr": "10.0.0.1", 00:16:55.730 "trsvcid": "34618" 00:16:55.730 }, 00:16:55.730 "auth": { 00:16:55.730 "state": "completed", 00:16:55.730 "digest": "sha384", 00:16:55.730 "dhgroup": "ffdhe2048" 00:16:55.730 } 00:16:55.730 } 00:16:55.730 ]' 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.730 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.988 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:16:56.634 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.634 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:56.634 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.634 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.893 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.893 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.893 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.893 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.893 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.152 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:57.152 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.152 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.152 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:57.152 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.152 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.152 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.152 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.152 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.152 18:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.152 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.152 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.411 00:16:57.411 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.411 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.411 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.669 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.669 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.669 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.669 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.669 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.669 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.669 { 00:16:57.669 "cntlid": 65, 00:16:57.669 "qid": 0, 00:16:57.669 "state": "enabled", 00:16:57.669 "thread": "nvmf_tgt_poll_group_000", 00:16:57.669 "listen_address": { 00:16:57.669 "trtype": "TCP", 00:16:57.669 "adrfam": "IPv4", 00:16:57.669 "traddr": "10.0.0.2", 00:16:57.669 "trsvcid": "4420" 00:16:57.669 }, 00:16:57.669 "peer_address": { 00:16:57.669 "trtype": "TCP", 00:16:57.669 "adrfam": "IPv4", 00:16:57.669 "traddr": "10.0.0.1", 00:16:57.669 "trsvcid": "34642" 00:16:57.669 }, 00:16:57.669 "auth": { 00:16:57.669 "state": "completed", 00:16:57.669 "digest": "sha384", 00:16:57.669 "dhgroup": "ffdhe3072" 00:16:57.669 } 00:16:57.669 } 00:16:57.669 ]' 00:16:57.669 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.669 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.669 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.669 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.669 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.927 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.927 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.927 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.185 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:16:58.751 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.751 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:16:58.751 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.751 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.751 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.751 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.751 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:58.751 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:59.009 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.575 00:16:59.575 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.575 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.575 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.871 { 00:16:59.871 "cntlid": 67, 00:16:59.871 "qid": 0, 00:16:59.871 "state": "enabled", 00:16:59.871 "thread": "nvmf_tgt_poll_group_000", 00:16:59.871 "listen_address": { 00:16:59.871 "trtype": "TCP", 00:16:59.871 "adrfam": "IPv4", 00:16:59.871 "traddr": "10.0.0.2", 00:16:59.871 "trsvcid": "4420" 00:16:59.871 }, 00:16:59.871 "peer_address": { 00:16:59.871 "trtype": "TCP", 00:16:59.871 "adrfam": "IPv4", 00:16:59.871 "traddr": "10.0.0.1", 00:16:59.871 "trsvcid": "34662" 00:16:59.871 }, 00:16:59.871 "auth": { 00:16:59.871 "state": "completed", 00:16:59.871 "digest": "sha384", 00:16:59.871 "dhgroup": "ffdhe3072" 00:16:59.871 } 00:16:59.871 } 00:16:59.871 ]' 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.871 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.129 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 
1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:17:00.695 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.953 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:00.953 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.953 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.953 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.953 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.953 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.953 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.211 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
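Each attempt pairs a target-side host registration with a host-side controller attach that references the same DH-HMAC-CHAP key material. Roughly, for the key2 iteration traced above (a sketch; the subsystem NQN, host UUID, addresses and key names are the ones visible in the log, rpc_cmd being the test framework's wrapper around rpc.py):

    # Target side: allow this host NQN and bind the key pair to it
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host side: attach a controller that must complete DH-HMAC-CHAP with the same keys
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2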
00:17:01.468 00:17:01.468 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.468 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.468 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.726 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.726 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.726 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.726 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.726 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.726 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.726 { 00:17:01.726 "cntlid": 69, 00:17:01.726 "qid": 0, 00:17:01.726 "state": "enabled", 00:17:01.726 "thread": "nvmf_tgt_poll_group_000", 00:17:01.726 "listen_address": { 00:17:01.726 "trtype": "TCP", 00:17:01.726 "adrfam": "IPv4", 00:17:01.726 "traddr": "10.0.0.2", 00:17:01.726 "trsvcid": "4420" 00:17:01.726 }, 00:17:01.726 "peer_address": { 00:17:01.726 "trtype": "TCP", 00:17:01.726 "adrfam": "IPv4", 00:17:01.726 "traddr": "10.0.0.1", 00:17:01.726 "trsvcid": "34698" 00:17:01.726 }, 00:17:01.726 "auth": { 00:17:01.726 "state": "completed", 00:17:01.726 "digest": "sha384", 00:17:01.726 "dhgroup": "ffdhe3072" 00:17:01.726 } 00:17:01.726 } 00:17:01.726 ]' 00:17:01.726 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.726 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.726 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.726 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.726 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.984 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.984 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.984 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.242 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:17:02.859 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
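After a successful attach the script queries the target's queue pairs and asserts that the negotiated auth parameters match the configuration, then detaches and repeats the same handshake through nvme-cli with the raw DHHC-1 secrets. A sketch of those checks, assuming the qpair JSON is captured into a variable as the trace suggests (jq filters and nvme-cli flags are copied from the log; the secrets are shortened here to <DHHC-1:...> placeholders):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # Same credentials exercised via nvme-cli, then torn down before the next iteration
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 \
        --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 \
        --dhchap-secret '<DHHC-1:02:...>' --dhchap-ctrl-secret '<DHHC-1:01:...>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0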
00:17:02.859 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:02.859 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.859 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.859 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.859 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.859 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:02.859 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.117 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:03.117 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.117 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.117 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:03.117 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.117 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.118 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:17:03.118 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.118 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.118 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.118 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.118 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.376 00:17:03.376 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.376 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.376 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.940 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.940 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.940 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.940 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.940 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.940 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.940 { 00:17:03.940 "cntlid": 71, 00:17:03.941 "qid": 0, 00:17:03.941 "state": "enabled", 00:17:03.941 "thread": "nvmf_tgt_poll_group_000", 00:17:03.941 "listen_address": { 00:17:03.941 "trtype": "TCP", 00:17:03.941 "adrfam": "IPv4", 00:17:03.941 "traddr": "10.0.0.2", 00:17:03.941 "trsvcid": "4420" 00:17:03.941 }, 00:17:03.941 "peer_address": { 00:17:03.941 "trtype": "TCP", 00:17:03.941 "adrfam": "IPv4", 00:17:03.941 "traddr": "10.0.0.1", 00:17:03.941 "trsvcid": "34724" 00:17:03.941 }, 00:17:03.941 "auth": { 00:17:03.941 "state": "completed", 00:17:03.941 "digest": "sha384", 00:17:03.941 "dhgroup": "ffdhe3072" 00:17:03.941 } 00:17:03.941 } 00:17:03.941 ]' 00:17:03.941 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.941 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.941 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.941 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:03.941 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.941 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.941 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.941 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.197 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:17:05.130 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.130 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:05.130 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.130 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.130 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:17:05.130 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.130 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.130 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.130 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.130 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.741 00:17:05.741 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.742 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.742 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.000 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.000 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.000 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.000 18:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.000 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.000 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.000 { 00:17:06.000 "cntlid": 73, 00:17:06.000 "qid": 0, 00:17:06.000 "state": "enabled", 00:17:06.000 "thread": "nvmf_tgt_poll_group_000", 00:17:06.000 "listen_address": { 00:17:06.000 "trtype": "TCP", 00:17:06.000 "adrfam": "IPv4", 00:17:06.000 "traddr": "10.0.0.2", 00:17:06.000 "trsvcid": "4420" 00:17:06.000 }, 00:17:06.000 "peer_address": { 00:17:06.000 "trtype": "TCP", 00:17:06.000 "adrfam": "IPv4", 00:17:06.000 "traddr": "10.0.0.1", 00:17:06.000 "trsvcid": "50034" 00:17:06.000 }, 00:17:06.000 "auth": { 00:17:06.000 "state": "completed", 00:17:06.000 "digest": "sha384", 00:17:06.000 "dhgroup": "ffdhe4096" 00:17:06.000 } 00:17:06.000 } 00:17:06.000 ]' 00:17:06.000 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.000 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.000 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.000 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:06.000 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.000 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.000 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.000 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.258 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:17:07.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:07.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:07.191 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:07.191 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:07.191 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.191 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:07.191 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:07.191 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:07.191 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.191 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.191 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.191 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.449 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.449 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.449 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.706 00:17:07.706 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.706 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.706 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.965 { 00:17:07.965 "cntlid": 75, 00:17:07.965 "qid": 0, 00:17:07.965 
"state": "enabled", 00:17:07.965 "thread": "nvmf_tgt_poll_group_000", 00:17:07.965 "listen_address": { 00:17:07.965 "trtype": "TCP", 00:17:07.965 "adrfam": "IPv4", 00:17:07.965 "traddr": "10.0.0.2", 00:17:07.965 "trsvcid": "4420" 00:17:07.965 }, 00:17:07.965 "peer_address": { 00:17:07.965 "trtype": "TCP", 00:17:07.965 "adrfam": "IPv4", 00:17:07.965 "traddr": "10.0.0.1", 00:17:07.965 "trsvcid": "50048" 00:17:07.965 }, 00:17:07.965 "auth": { 00:17:07.965 "state": "completed", 00:17:07.965 "digest": "sha384", 00:17:07.965 "dhgroup": "ffdhe4096" 00:17:07.965 } 00:17:07.965 } 00:17:07.965 ]' 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.965 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.223 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:17:09.156 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.156 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:09.156 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.156 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.156 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.156 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.156 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.156 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.156 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.721 00:17:09.721 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.721 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.721 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.979 { 00:17:09.979 "cntlid": 77, 00:17:09.979 "qid": 0, 00:17:09.979 "state": "enabled", 00:17:09.979 "thread": "nvmf_tgt_poll_group_000", 00:17:09.979 "listen_address": { 00:17:09.979 "trtype": "TCP", 00:17:09.979 "adrfam": "IPv4", 00:17:09.979 "traddr": "10.0.0.2", 00:17:09.979 "trsvcid": "4420" 00:17:09.979 }, 00:17:09.979 "peer_address": { 00:17:09.979 "trtype": "TCP", 00:17:09.979 "adrfam": "IPv4", 00:17:09.979 "traddr": "10.0.0.1", 00:17:09.979 "trsvcid": "50080" 00:17:09.979 }, 00:17:09.979 
"auth": { 00:17:09.979 "state": "completed", 00:17:09.979 "digest": "sha384", 00:17:09.979 "dhgroup": "ffdhe4096" 00:17:09.979 } 00:17:09.979 } 00:17:09.979 ]' 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.979 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.237 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:17:11.172 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.172 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:11.172 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.172 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.172 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.172 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.172 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.172 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.172 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:11.172 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.172 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.172 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:11.172 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:17:11.173 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.173 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:17:11.173 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.173 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.173 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.173 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.173 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.809 00:17:11.809 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.809 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.809 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.067 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.067 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.067 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.067 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.067 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.067 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.067 { 00:17:12.067 "cntlid": 79, 00:17:12.067 "qid": 0, 00:17:12.067 "state": "enabled", 00:17:12.067 "thread": "nvmf_tgt_poll_group_000", 00:17:12.067 "listen_address": { 00:17:12.067 "trtype": "TCP", 00:17:12.067 "adrfam": "IPv4", 00:17:12.067 "traddr": "10.0.0.2", 00:17:12.067 "trsvcid": "4420" 00:17:12.067 }, 00:17:12.067 "peer_address": { 00:17:12.067 "trtype": "TCP", 00:17:12.067 "adrfam": "IPv4", 00:17:12.067 "traddr": "10.0.0.1", 00:17:12.067 "trsvcid": "50104" 00:17:12.067 }, 00:17:12.067 "auth": { 00:17:12.068 "state": "completed", 00:17:12.068 "digest": "sha384", 00:17:12.068 "dhgroup": "ffdhe4096" 00:17:12.068 } 00:17:12.068 } 00:17:12.068 ]' 00:17:12.068 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.068 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.068 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
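Once the digest/dhgroup/state checks running here pass, each cycle also exercises the kernel initiator: the SPDK host controller is detached and nvme-cli connects with the corresponding inline DHHC-1 secrets before the host entry is removed again. In rough outline (flags as they appear in the trace; $key is a placeholder for the DHHC-1:xx:...: string logged below, and a matching --dhchap-ctrl-secret is passed only when a controller key exists for that index, which it does not for key3; the remove_host call is shown against the target's default RPC socket as an assumption):

hostid=1e224894-a0fc-4112-b81b-a37606f50c96
subnqn=nqn.2024-03.io.spdk:cnode0

# Kernel-initiator leg: connect with the inline DH-HMAC-CHAP secret for this key index.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
  -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
  --dhchap-secret "$key"

nvme disconnect -n "$subnqn"

# Deauthorize the host on the target so the next key/dhgroup combination starts clean.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" \
  "nqn.2014-08.org.nvmexpress:uuid:${hostid}"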
00:17:12.068 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.068 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.068 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.068 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.068 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.326 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:17:13.261 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.261 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:13.261 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.261 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.261 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.261 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.261 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.261 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.261 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.519 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:13.519 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.519 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.519 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:13.519 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.519 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.519 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.519 18:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.519 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.519 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.519 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.519 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.777 00:17:13.777 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.777 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.777 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.035 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.036 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.036 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.036 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.036 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.036 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.036 { 00:17:14.036 "cntlid": 81, 00:17:14.036 "qid": 0, 00:17:14.036 "state": "enabled", 00:17:14.036 "thread": "nvmf_tgt_poll_group_000", 00:17:14.036 "listen_address": { 00:17:14.036 "trtype": "TCP", 00:17:14.036 "adrfam": "IPv4", 00:17:14.036 "traddr": "10.0.0.2", 00:17:14.036 "trsvcid": "4420" 00:17:14.036 }, 00:17:14.036 "peer_address": { 00:17:14.036 "trtype": "TCP", 00:17:14.036 "adrfam": "IPv4", 00:17:14.036 "traddr": "10.0.0.1", 00:17:14.036 "trsvcid": "50140" 00:17:14.036 }, 00:17:14.036 "auth": { 00:17:14.036 "state": "completed", 00:17:14.036 "digest": "sha384", 00:17:14.036 "dhgroup": "ffdhe6144" 00:17:14.036 } 00:17:14.036 } 00:17:14.036 ]' 00:17:14.036 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.294 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.294 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.294 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.294 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.294 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:14.294 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.294 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.552 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:17:15.487 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.487 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:15.487 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.488 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.055 00:17:16.055 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.055 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.055 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.313 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.313 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.313 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.313 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.313 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.313 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.313 { 00:17:16.313 "cntlid": 83, 00:17:16.313 "qid": 0, 00:17:16.313 "state": "enabled", 00:17:16.313 "thread": "nvmf_tgt_poll_group_000", 00:17:16.313 "listen_address": { 00:17:16.313 "trtype": "TCP", 00:17:16.313 "adrfam": "IPv4", 00:17:16.313 "traddr": "10.0.0.2", 00:17:16.313 "trsvcid": "4420" 00:17:16.313 }, 00:17:16.313 "peer_address": { 00:17:16.313 "trtype": "TCP", 00:17:16.313 "adrfam": "IPv4", 00:17:16.313 "traddr": "10.0.0.1", 00:17:16.313 "trsvcid": "52644" 00:17:16.313 }, 00:17:16.313 "auth": { 00:17:16.313 "state": "completed", 00:17:16.313 "digest": "sha384", 00:17:16.313 "dhgroup": "ffdhe6144" 00:17:16.313 } 00:17:16.313 } 00:17:16.313 ]' 00:17:16.313 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.313 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.313 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.571 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.571 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.571 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.572 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.572 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.830 18:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:17:17.434 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.434 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:17.434 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.434 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.692 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.951 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.951 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.951 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.209 00:17:18.209 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.209 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.209 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.775 { 00:17:18.775 "cntlid": 85, 00:17:18.775 "qid": 0, 00:17:18.775 "state": "enabled", 00:17:18.775 "thread": "nvmf_tgt_poll_group_000", 00:17:18.775 "listen_address": { 00:17:18.775 "trtype": "TCP", 00:17:18.775 "adrfam": "IPv4", 00:17:18.775 "traddr": "10.0.0.2", 00:17:18.775 "trsvcid": "4420" 00:17:18.775 }, 00:17:18.775 "peer_address": { 00:17:18.775 "trtype": "TCP", 00:17:18.775 "adrfam": "IPv4", 00:17:18.775 "traddr": "10.0.0.1", 00:17:18.775 "trsvcid": "52660" 00:17:18.775 }, 00:17:18.775 "auth": { 00:17:18.775 "state": "completed", 00:17:18.775 "digest": "sha384", 00:17:18.775 "dhgroup": "ffdhe6144" 00:17:18.775 } 00:17:18.775 } 00:17:18.775 ]' 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.775 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.033 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret 
DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.967 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.534 00:17:20.534 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.534 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.534 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.792 { 00:17:20.792 "cntlid": 87, 00:17:20.792 "qid": 0, 00:17:20.792 "state": "enabled", 00:17:20.792 "thread": "nvmf_tgt_poll_group_000", 00:17:20.792 "listen_address": { 00:17:20.792 "trtype": "TCP", 00:17:20.792 "adrfam": "IPv4", 00:17:20.792 "traddr": "10.0.0.2", 00:17:20.792 "trsvcid": "4420" 00:17:20.792 }, 00:17:20.792 "peer_address": { 00:17:20.792 "trtype": "TCP", 00:17:20.792 "adrfam": "IPv4", 00:17:20.792 "traddr": "10.0.0.1", 00:17:20.792 "trsvcid": "52682" 00:17:20.792 }, 00:17:20.792 "auth": { 00:17:20.792 "state": "completed", 00:17:20.792 "digest": "sha384", 00:17:20.792 "dhgroup": "ffdhe6144" 00:17:20.792 } 00:17:20.792 } 00:17:20.792 ]' 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.792 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.050 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:17:21.984 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.984 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:21.984 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.984 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.984 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.984 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.984 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.984 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.984 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.242 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.808 00:17:22.808 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.808 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.808 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.128 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.128 18:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.128 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.128 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.128 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.128 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.128 { 00:17:23.128 "cntlid": 89, 00:17:23.128 "qid": 0, 00:17:23.128 "state": "enabled", 00:17:23.128 "thread": "nvmf_tgt_poll_group_000", 00:17:23.128 "listen_address": { 00:17:23.128 "trtype": "TCP", 00:17:23.128 "adrfam": "IPv4", 00:17:23.128 "traddr": "10.0.0.2", 00:17:23.128 "trsvcid": "4420" 00:17:23.128 }, 00:17:23.128 "peer_address": { 00:17:23.128 "trtype": "TCP", 00:17:23.128 "adrfam": "IPv4", 00:17:23.128 "traddr": "10.0.0.1", 00:17:23.128 "trsvcid": "52708" 00:17:23.128 }, 00:17:23.128 "auth": { 00:17:23.128 "state": "completed", 00:17:23.128 "digest": "sha384", 00:17:23.128 "dhgroup": "ffdhe8192" 00:17:23.128 } 00:17:23.128 } 00:17:23.128 ]' 00:17:23.128 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.128 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.128 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.128 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.128 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.128 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.128 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.129 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.696 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:17:24.262 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.262 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:24.262 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.262 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.262 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.262 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.262 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.262 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.520 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:24.520 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.520 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.520 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:24.520 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:24.520 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.520 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.520 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.521 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.521 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.521 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.521 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.086 00:17:25.086 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.086 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.086 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.344 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.344 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.344 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.344 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.344 18:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.344 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.344 { 00:17:25.344 "cntlid": 91, 00:17:25.344 "qid": 0, 00:17:25.344 "state": "enabled", 00:17:25.344 "thread": "nvmf_tgt_poll_group_000", 00:17:25.344 "listen_address": { 00:17:25.344 "trtype": "TCP", 00:17:25.344 "adrfam": "IPv4", 00:17:25.344 "traddr": "10.0.0.2", 00:17:25.344 "trsvcid": "4420" 00:17:25.344 }, 00:17:25.344 "peer_address": { 00:17:25.344 "trtype": "TCP", 00:17:25.344 "adrfam": "IPv4", 00:17:25.344 "traddr": "10.0.0.1", 00:17:25.344 "trsvcid": "35222" 00:17:25.344 }, 00:17:25.344 "auth": { 00:17:25.344 "state": "completed", 00:17:25.344 "digest": "sha384", 00:17:25.344 "dhgroup": "ffdhe8192" 00:17:25.344 } 00:17:25.344 } 00:17:25.344 ]' 00:17:25.344 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.603 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.861 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:17:26.428 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.428 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:26.428 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.428 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.428 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.428 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.428 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.428 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.687 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.253 00:17:27.253 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.253 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.253 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.819 { 00:17:27.819 "cntlid": 93, 00:17:27.819 "qid": 0, 00:17:27.819 "state": "enabled", 00:17:27.819 "thread": "nvmf_tgt_poll_group_000", 00:17:27.819 "listen_address": { 00:17:27.819 "trtype": "TCP", 00:17:27.819 "adrfam": "IPv4", 
00:17:27.819 "traddr": "10.0.0.2", 00:17:27.819 "trsvcid": "4420" 00:17:27.819 }, 00:17:27.819 "peer_address": { 00:17:27.819 "trtype": "TCP", 00:17:27.819 "adrfam": "IPv4", 00:17:27.819 "traddr": "10.0.0.1", 00:17:27.819 "trsvcid": "35258" 00:17:27.819 }, 00:17:27.819 "auth": { 00:17:27.819 "state": "completed", 00:17:27.819 "digest": "sha384", 00:17:27.819 "dhgroup": "ffdhe8192" 00:17:27.819 } 00:17:27.819 } 00:17:27.819 ]' 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.819 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.076 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:17:28.664 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.664 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:28.664 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.664 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.922 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.922 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.922 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:28.922 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.180 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:29.180 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.180 18:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.180 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:29.180 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.180 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.180 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:17:29.180 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.180 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.180 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.180 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.180 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.748 00:17:29.748 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.748 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.748 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.006 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.006 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.006 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.006 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.006 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.006 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.006 { 00:17:30.006 "cntlid": 95, 00:17:30.006 "qid": 0, 00:17:30.006 "state": "enabled", 00:17:30.006 "thread": "nvmf_tgt_poll_group_000", 00:17:30.006 "listen_address": { 00:17:30.006 "trtype": "TCP", 00:17:30.006 "adrfam": "IPv4", 00:17:30.006 "traddr": "10.0.0.2", 00:17:30.006 "trsvcid": "4420" 00:17:30.006 }, 00:17:30.006 "peer_address": { 00:17:30.006 "trtype": "TCP", 00:17:30.006 "adrfam": "IPv4", 00:17:30.006 "traddr": "10.0.0.1", 00:17:30.006 "trsvcid": "35270" 00:17:30.006 }, 00:17:30.006 "auth": { 00:17:30.006 "state": "completed", 00:17:30.006 "digest": "sha384", 00:17:30.006 "dhgroup": "ffdhe8192" 00:17:30.006 } 00:17:30.006 } 00:17:30.006 ]' 00:17:30.006 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.006 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.006 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.265 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.265 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.265 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.265 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.265 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.523 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:17:31.090 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.090 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:31.090 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.090 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.090 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.090 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:31.090 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.090 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.090 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:31.090 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:31.349 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:31.349 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.349 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.349 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:31.349 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.349 18:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.349 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.349 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.349 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.349 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.349 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.349 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.915 00:17:31.915 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.915 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.915 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.173 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.173 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.173 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.173 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.173 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.173 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.173 { 00:17:32.173 "cntlid": 97, 00:17:32.173 "qid": 0, 00:17:32.173 "state": "enabled", 00:17:32.173 "thread": "nvmf_tgt_poll_group_000", 00:17:32.173 "listen_address": { 00:17:32.173 "trtype": "TCP", 00:17:32.173 "adrfam": "IPv4", 00:17:32.173 "traddr": "10.0.0.2", 00:17:32.173 "trsvcid": "4420" 00:17:32.173 }, 00:17:32.173 "peer_address": { 00:17:32.173 "trtype": "TCP", 00:17:32.173 "adrfam": "IPv4", 00:17:32.173 "traddr": "10.0.0.1", 00:17:32.173 "trsvcid": "35298" 00:17:32.173 }, 00:17:32.173 "auth": { 00:17:32.173 "state": "completed", 00:17:32.173 "digest": "sha512", 00:17:32.173 "dhgroup": "null" 00:17:32.173 } 00:17:32.173 } 00:17:32.173 ]' 00:17:32.173 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.173 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.173 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.173 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:32.173 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.173 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.173 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.173 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.432 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:17:33.000 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.000 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:33.000 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.000 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.258 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.258 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.258 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.258 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.516 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:33.516 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.516 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.516 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:33.516 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:33.516 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.516 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.516 18:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.516 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.516 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.516 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.516 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.774 00:17:33.774 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.774 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.774 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.033 { 00:17:34.033 "cntlid": 99, 00:17:34.033 "qid": 0, 00:17:34.033 "state": "enabled", 00:17:34.033 "thread": "nvmf_tgt_poll_group_000", 00:17:34.033 "listen_address": { 00:17:34.033 "trtype": "TCP", 00:17:34.033 "adrfam": "IPv4", 00:17:34.033 "traddr": "10.0.0.2", 00:17:34.033 "trsvcid": "4420" 00:17:34.033 }, 00:17:34.033 "peer_address": { 00:17:34.033 "trtype": "TCP", 00:17:34.033 "adrfam": "IPv4", 00:17:34.033 "traddr": "10.0.0.1", 00:17:34.033 "trsvcid": "35320" 00:17:34.033 }, 00:17:34.033 "auth": { 00:17:34.033 "state": "completed", 00:17:34.033 "digest": "sha512", 00:17:34.033 "dhgroup": "null" 00:17:34.033 } 00:17:34.033 } 00:17:34.033 ]' 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
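For reference, the check that repeats after every controller attach in this trace reduces to three jq assertions against the target's nvmf_subsystem_get_qpairs output. A minimal sketch in shell, using the rpc_cmd helper from the test harness and the expected values of this particular iteration (sha512 digest, null DH group); the here-string piping is illustrative, not the literal script:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)     # target-side view of the connection
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]                 # negotiated DH-HMAC-CHAP hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]                   # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]              # authentication finished successfully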
00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.033 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.291 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:17:35.229 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.229 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:35.229 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.229 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.229 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.229 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.229 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.229 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.229 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.796 00:17:35.796 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.796 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.796 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.796 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.796 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.796 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.796 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.796 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.796 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.796 { 00:17:35.796 "cntlid": 101, 00:17:35.796 "qid": 0, 00:17:35.796 "state": "enabled", 00:17:35.796 "thread": "nvmf_tgt_poll_group_000", 00:17:35.796 "listen_address": { 00:17:35.796 "trtype": "TCP", 00:17:35.796 "adrfam": "IPv4", 00:17:35.796 "traddr": "10.0.0.2", 00:17:35.796 "trsvcid": "4420" 00:17:35.796 }, 00:17:35.796 "peer_address": { 00:17:35.796 "trtype": "TCP", 00:17:35.796 "adrfam": "IPv4", 00:17:35.796 "traddr": "10.0.0.1", 00:17:35.796 "trsvcid": "45738" 00:17:35.796 }, 00:17:35.796 "auth": { 00:17:35.796 "state": "completed", 00:17:35.796 "digest": "sha512", 00:17:35.796 "dhgroup": "null" 00:17:35.796 } 00:17:35.796 } 00:17:35.796 ]' 00:17:35.796 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.054 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.054 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.054 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:36.054 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.054 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.054 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.054 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.311 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:17:37.272 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.272 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:37.272 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.272 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.272 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.272 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.272 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.272 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.272 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.530 
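Each keyid iteration in this trace follows the same three-step pattern before the checks above: constrain the host's allowed digests and DH groups, register the host on the target with the key under test, then attach the controller while authenticating. A condensed sketch, assuming the keys[]/ckeys[] arrays and the hostrpc/rpc_cmd wrappers defined by the test scripts; the digest/dhgroup values are the ones from this group of iterations:

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96
  for keyid in "${!keys[@]}"; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # controller key only where one exists (none for key3)
      # host side: limit the initiator to a single digest/dhgroup combination
      hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
      # target side: allow this host NQN to authenticate with the key under test
      rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
      # host side: attach the controller, which performs DH-HMAC-CHAP during connect
      hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" "${ckey[@]}"
  done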
00:17:37.530 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.530 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.530 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.095 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.096 { 00:17:38.096 "cntlid": 103, 00:17:38.096 "qid": 0, 00:17:38.096 "state": "enabled", 00:17:38.096 "thread": "nvmf_tgt_poll_group_000", 00:17:38.096 "listen_address": { 00:17:38.096 "trtype": "TCP", 00:17:38.096 "adrfam": "IPv4", 00:17:38.096 "traddr": "10.0.0.2", 00:17:38.096 "trsvcid": "4420" 00:17:38.096 }, 00:17:38.096 "peer_address": { 00:17:38.096 "trtype": "TCP", 00:17:38.096 "adrfam": "IPv4", 00:17:38.096 "traddr": "10.0.0.1", 00:17:38.096 "trsvcid": "45764" 00:17:38.096 }, 00:17:38.096 "auth": { 00:17:38.096 "state": "completed", 00:17:38.096 "digest": "sha512", 00:17:38.096 "dhgroup": "null" 00:17:38.096 } 00:17:38.096 } 00:17:38.096 ]' 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.096 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.353 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:17:38.918 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.183 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:39.183 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.183 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.183 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.183 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.183 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.183 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.183 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.183 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.748 00:17:39.748 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.748 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.748 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.007 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.007 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.007 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.007 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.007 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.007 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.007 { 00:17:40.007 "cntlid": 105, 00:17:40.007 "qid": 0, 00:17:40.007 "state": "enabled", 00:17:40.007 "thread": "nvmf_tgt_poll_group_000", 00:17:40.007 "listen_address": { 00:17:40.007 "trtype": "TCP", 00:17:40.007 "adrfam": "IPv4", 00:17:40.007 "traddr": "10.0.0.2", 00:17:40.007 "trsvcid": "4420" 00:17:40.007 }, 00:17:40.007 "peer_address": { 00:17:40.007 "trtype": "TCP", 00:17:40.007 "adrfam": "IPv4", 00:17:40.007 "traddr": "10.0.0.1", 00:17:40.007 "trsvcid": "45790" 00:17:40.007 }, 00:17:40.007 "auth": { 00:17:40.007 "state": "completed", 00:17:40.007 "digest": "sha512", 00:17:40.007 "dhgroup": "ffdhe2048" 00:17:40.007 } 00:17:40.007 } 00:17:40.007 ]' 00:17:40.007 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.007 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.007 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.007 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.007 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.265 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.265 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.265 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.523 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:17:41.091 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.091 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:41.091 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
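The tail of every cycle, visible again just above, swaps in the kernel initiator: nvme-cli connects to the same subsystem with the raw DH-HMAC-CHAP secrets, disconnects, and the host entry is removed from the target before the next combination. Condensed from the commands in this trace, with the secret strings elided rather than repeated:

  # kernel host: authenticate with nvme-cli using the secrets that correspond to the key under test
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 \
      --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 \
      --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'   # placeholders; --dhchap-ctrl-secret is omitted for key3
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # target side: drop the host entry so the next digest/dhgroup/key combination starts clean
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96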
00:17:41.091 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.091 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.091 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.091 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:41.091 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.350 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.917 00:17:41.917 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.917 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.917 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.175 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.175 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.175 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.175 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.175 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.175 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.175 { 00:17:42.175 "cntlid": 107, 00:17:42.175 "qid": 0, 00:17:42.175 "state": "enabled", 00:17:42.175 "thread": "nvmf_tgt_poll_group_000", 00:17:42.175 "listen_address": { 00:17:42.175 "trtype": "TCP", 00:17:42.175 "adrfam": "IPv4", 00:17:42.175 "traddr": "10.0.0.2", 00:17:42.175 "trsvcid": "4420" 00:17:42.175 }, 00:17:42.175 "peer_address": { 00:17:42.175 "trtype": "TCP", 00:17:42.175 "adrfam": "IPv4", 00:17:42.175 "traddr": "10.0.0.1", 00:17:42.175 "trsvcid": "45820" 00:17:42.175 }, 00:17:42.175 "auth": { 00:17:42.175 "state": "completed", 00:17:42.175 "digest": "sha512", 00:17:42.175 "dhgroup": "ffdhe2048" 00:17:42.175 } 00:17:42.175 } 00:17:42.175 ]' 00:17:42.175 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.175 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.175 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.175 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.175 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.175 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.175 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.175 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.433 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:17:43.392 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.392 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:43.392 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.392 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.392 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.392 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.392 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.392 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.650 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.908 00:17:43.908 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.908 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.908 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.166 { 00:17:44.166 "cntlid": 109, 00:17:44.166 "qid": 
0, 00:17:44.166 "state": "enabled", 00:17:44.166 "thread": "nvmf_tgt_poll_group_000", 00:17:44.166 "listen_address": { 00:17:44.166 "trtype": "TCP", 00:17:44.166 "adrfam": "IPv4", 00:17:44.166 "traddr": "10.0.0.2", 00:17:44.166 "trsvcid": "4420" 00:17:44.166 }, 00:17:44.166 "peer_address": { 00:17:44.166 "trtype": "TCP", 00:17:44.166 "adrfam": "IPv4", 00:17:44.166 "traddr": "10.0.0.1", 00:17:44.166 "trsvcid": "43212" 00:17:44.166 }, 00:17:44.166 "auth": { 00:17:44.166 "state": "completed", 00:17:44.166 "digest": "sha512", 00:17:44.166 "dhgroup": "ffdhe2048" 00:17:44.166 } 00:17:44.166 } 00:17:44.166 ]' 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.166 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.732 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:17:45.298 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.298 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:45.298 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.298 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.298 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.298 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.298 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.298 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 3 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.555 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.813 00:17:45.813 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.813 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.813 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.071 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.071 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.071 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.071 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.071 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.071 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.071 { 00:17:46.071 "cntlid": 111, 00:17:46.071 "qid": 0, 00:17:46.071 "state": "enabled", 00:17:46.071 "thread": "nvmf_tgt_poll_group_000", 00:17:46.071 "listen_address": { 00:17:46.071 "trtype": "TCP", 00:17:46.071 "adrfam": "IPv4", 00:17:46.071 "traddr": "10.0.0.2", 00:17:46.071 "trsvcid": "4420" 00:17:46.071 }, 00:17:46.071 "peer_address": { 00:17:46.071 "trtype": "TCP", 00:17:46.071 "adrfam": "IPv4", 00:17:46.071 "traddr": "10.0.0.1", 00:17:46.071 "trsvcid": "43234" 00:17:46.071 }, 00:17:46.071 "auth": { 00:17:46.071 "state": "completed", 00:17:46.071 
"digest": "sha512", 00:17:46.071 "dhgroup": "ffdhe2048" 00:17:46.071 } 00:17:46.071 } 00:17:46.071 ]' 00:17:46.071 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.329 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.329 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.329 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.329 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.329 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.329 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.329 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.587 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:17:47.153 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.153 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:47.153 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.153 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.153 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.153 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.153 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.153 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.153 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.718 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.975 00:17:47.975 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.975 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.975 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.233 { 00:17:48.233 "cntlid": 113, 00:17:48.233 "qid": 0, 00:17:48.233 "state": "enabled", 00:17:48.233 "thread": "nvmf_tgt_poll_group_000", 00:17:48.233 "listen_address": { 00:17:48.233 "trtype": "TCP", 00:17:48.233 "adrfam": "IPv4", 00:17:48.233 "traddr": "10.0.0.2", 00:17:48.233 "trsvcid": "4420" 00:17:48.233 }, 00:17:48.233 "peer_address": { 00:17:48.233 "trtype": "TCP", 00:17:48.233 "adrfam": "IPv4", 00:17:48.233 "traddr": "10.0.0.1", 00:17:48.233 "trsvcid": "43266" 00:17:48.233 }, 00:17:48.233 "auth": { 00:17:48.233 "state": "completed", 00:17:48.233 "digest": "sha512", 00:17:48.233 "dhgroup": "ffdhe3072" 00:17:48.233 } 00:17:48.233 } 00:17:48.233 ]' 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.233 18:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.233 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.491 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:17:49.057 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.316 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:49.316 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.316 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.316 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.316 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.316 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.316 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.575 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.834 00:17:49.834 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.834 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.834 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.092 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.092 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.092 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.092 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.092 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.092 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.092 { 00:17:50.092 "cntlid": 115, 00:17:50.092 "qid": 0, 00:17:50.092 "state": "enabled", 00:17:50.092 "thread": "nvmf_tgt_poll_group_000", 00:17:50.092 "listen_address": { 00:17:50.092 "trtype": "TCP", 00:17:50.092 "adrfam": "IPv4", 00:17:50.092 "traddr": "10.0.0.2", 00:17:50.092 "trsvcid": "4420" 00:17:50.092 }, 00:17:50.092 "peer_address": { 00:17:50.092 "trtype": "TCP", 00:17:50.092 "adrfam": "IPv4", 00:17:50.092 "traddr": "10.0.0.1", 00:17:50.092 "trsvcid": "43296" 00:17:50.092 }, 00:17:50.092 "auth": { 00:17:50.092 "state": "completed", 00:17:50.092 "digest": "sha512", 00:17:50.092 "dhgroup": "ffdhe3072" 00:17:50.092 } 00:17:50.092 } 00:17:50.092 ]' 00:17:50.092 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.092 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.092 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.350 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.350 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.350 18:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.350 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.350 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.608 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:17:51.173 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.173 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:51.173 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.173 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.173 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.173 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.173 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.173 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.432 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:51.432 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.432 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.432 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:51.432 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.432 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.432 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.432 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.432 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.432 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.432 18:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.432 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.998 00:17:51.998 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.998 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.998 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.256 { 00:17:52.256 "cntlid": 117, 00:17:52.256 "qid": 0, 00:17:52.256 "state": "enabled", 00:17:52.256 "thread": "nvmf_tgt_poll_group_000", 00:17:52.256 "listen_address": { 00:17:52.256 "trtype": "TCP", 00:17:52.256 "adrfam": "IPv4", 00:17:52.256 "traddr": "10.0.0.2", 00:17:52.256 "trsvcid": "4420" 00:17:52.256 }, 00:17:52.256 "peer_address": { 00:17:52.256 "trtype": "TCP", 00:17:52.256 "adrfam": "IPv4", 00:17:52.256 "traddr": "10.0.0.1", 00:17:52.256 "trsvcid": "43316" 00:17:52.256 }, 00:17:52.256 "auth": { 00:17:52.256 "state": "completed", 00:17:52.256 "digest": "sha512", 00:17:52.256 "dhgroup": "ffdhe3072" 00:17:52.256 } 00:17:52.256 } 00:17:52.256 ]' 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.256 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
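(A condensed sketch of the connect_authenticate cycle the trace above keeps repeating, written out as plain shell for readability. It is not part of the captured output: the rpc.py path, socket, addresses, NQNs, flags and jq filters are copied from the trace, while key2/ckey2 simply stand for whichever key index the loop is on.)

# One connect_authenticate iteration as exercised above (here sha512 / ffdhe3072, key index 2).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96

# Restrict the host-side bdev layer to the digest/dhgroup under test.
$rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Allow the host on the target subsystem with the key pair under test.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach a controller through the host RPC socket, authenticating with the same keys.
$rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the controller exists and that the qpair completed auth with the expected parameters.
$rpc -s $hostsock bdev_nvme_get_controllers | jq -r '.[].name'       # expect: nvme0
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.digest'    # expect: sha512
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.dhgroup'   # expect: ffdhe3072
$rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'     # expect: completed

# Tear down before the next key/dhgroup combination.
$rpc -s $hostsock bdev_nvme_detach_controller nvme0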
00:17:52.514 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:17:53.080 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.080 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:53.080 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.080 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.339 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.339 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.339 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.339 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.597 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.854 00:17:53.854 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.854 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.854 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.113 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.113 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.113 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.113 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.113 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.113 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.113 { 00:17:54.113 "cntlid": 119, 00:17:54.113 "qid": 0, 00:17:54.113 "state": "enabled", 00:17:54.113 "thread": "nvmf_tgt_poll_group_000", 00:17:54.113 "listen_address": { 00:17:54.113 "trtype": "TCP", 00:17:54.113 "adrfam": "IPv4", 00:17:54.113 "traddr": "10.0.0.2", 00:17:54.113 "trsvcid": "4420" 00:17:54.113 }, 00:17:54.113 "peer_address": { 00:17:54.113 "trtype": "TCP", 00:17:54.113 "adrfam": "IPv4", 00:17:54.113 "traddr": "10.0.0.1", 00:17:54.113 "trsvcid": "58348" 00:17:54.113 }, 00:17:54.113 "auth": { 00:17:54.113 "state": "completed", 00:17:54.113 "digest": "sha512", 00:17:54.113 "dhgroup": "ffdhe3072" 00:17:54.113 } 00:17:54.113 } 00:17:54.113 ]' 00:17:54.113 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.113 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.113 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.113 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.113 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.113 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.113 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.113 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.371 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:17:55.306 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:55.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.306 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:55.306 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.306 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.306 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.306 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.306 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.306 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.306 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.306 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.883 00:17:55.883 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.883 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.883 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.150 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.150 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.150 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.150 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.150 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.150 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.150 { 00:17:56.150 "cntlid": 121, 00:17:56.150 "qid": 0, 00:17:56.150 "state": "enabled", 00:17:56.150 "thread": "nvmf_tgt_poll_group_000", 00:17:56.150 "listen_address": { 00:17:56.150 "trtype": "TCP", 00:17:56.150 "adrfam": "IPv4", 00:17:56.150 "traddr": "10.0.0.2", 00:17:56.150 "trsvcid": "4420" 00:17:56.150 }, 00:17:56.150 "peer_address": { 00:17:56.150 "trtype": "TCP", 00:17:56.150 "adrfam": "IPv4", 00:17:56.150 "traddr": "10.0.0.1", 00:17:56.150 "trsvcid": "58372" 00:17:56.150 }, 00:17:56.150 "auth": { 00:17:56.150 "state": "completed", 00:17:56.150 "digest": "sha512", 00:17:56.150 "dhgroup": "ffdhe4096" 00:17:56.150 } 00:17:56.150 } 00:17:56.150 ]' 00:17:56.150 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.150 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.150 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.150 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.150 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.150 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.150 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.150 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.409 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:17:57.344 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.344 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:57.344 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.344 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.344 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.344 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.344 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.344 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.602 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.862 00:17:57.862 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.862 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.862 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.149 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.149 18:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.149 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.149 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.149 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.149 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.149 { 00:17:58.149 "cntlid": 123, 00:17:58.149 "qid": 0, 00:17:58.149 "state": "enabled", 00:17:58.149 "thread": "nvmf_tgt_poll_group_000", 00:17:58.149 "listen_address": { 00:17:58.149 "trtype": "TCP", 00:17:58.149 "adrfam": "IPv4", 00:17:58.149 "traddr": "10.0.0.2", 00:17:58.149 "trsvcid": "4420" 00:17:58.149 }, 00:17:58.149 "peer_address": { 00:17:58.149 "trtype": "TCP", 00:17:58.149 "adrfam": "IPv4", 00:17:58.149 "traddr": "10.0.0.1", 00:17:58.149 "trsvcid": "58410" 00:17:58.149 }, 00:17:58.149 "auth": { 00:17:58.149 "state": "completed", 00:17:58.149 "digest": "sha512", 00:17:58.149 "dhgroup": "ffdhe4096" 00:17:58.149 } 00:17:58.149 } 00:17:58.149 ]' 00:17:58.149 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.149 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.149 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.149 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.149 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.408 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.408 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.408 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.666 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:17:59.233 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.233 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:17:59.233 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.233 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.233 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:17:59.233 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.233 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.233 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.491 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.056 00:18:00.056 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.056 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.056 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.314 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.314 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.314 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.314 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.314 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.314 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.314 { 00:18:00.314 "cntlid": 125, 00:18:00.314 "qid": 0, 00:18:00.314 "state": "enabled", 00:18:00.314 "thread": "nvmf_tgt_poll_group_000", 00:18:00.314 "listen_address": { 00:18:00.314 "trtype": "TCP", 00:18:00.314 "adrfam": "IPv4", 00:18:00.314 "traddr": "10.0.0.2", 00:18:00.314 "trsvcid": "4420" 00:18:00.314 }, 00:18:00.314 "peer_address": { 00:18:00.314 "trtype": "TCP", 00:18:00.314 "adrfam": "IPv4", 00:18:00.314 "traddr": "10.0.0.1", 00:18:00.314 "trsvcid": "58440" 00:18:00.314 }, 00:18:00.314 "auth": { 00:18:00.314 "state": "completed", 00:18:00.314 "digest": "sha512", 00:18:00.315 "dhgroup": "ffdhe4096" 00:18:00.315 } 00:18:00.315 } 00:18:00.315 ]' 00:18:00.315 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.315 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.315 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.315 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.315 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.315 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.315 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.315 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.579 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.514 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.081 00:18:02.081 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.081 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.081 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.339 { 00:18:02.339 "cntlid": 127, 00:18:02.339 "qid": 0, 00:18:02.339 "state": "enabled", 00:18:02.339 "thread": "nvmf_tgt_poll_group_000", 00:18:02.339 "listen_address": { 00:18:02.339 "trtype": "TCP", 00:18:02.339 "adrfam": "IPv4", 00:18:02.339 "traddr": "10.0.0.2", 00:18:02.339 "trsvcid": "4420" 00:18:02.339 }, 00:18:02.339 "peer_address": { 
00:18:02.339 "trtype": "TCP", 00:18:02.339 "adrfam": "IPv4", 00:18:02.339 "traddr": "10.0.0.1", 00:18:02.339 "trsvcid": "58460" 00:18:02.339 }, 00:18:02.339 "auth": { 00:18:02.339 "state": "completed", 00:18:02.339 "digest": "sha512", 00:18:02.339 "dhgroup": "ffdhe4096" 00:18:02.339 } 00:18:02.339 } 00:18:02.339 ]' 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.339 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.906 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:18:03.475 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.475 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:03.475 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.475 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.475 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.475 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.475 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.475 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.475 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.734 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.297 00:18:04.297 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.297 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.297 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.555 { 00:18:04.555 "cntlid": 129, 00:18:04.555 "qid": 0, 00:18:04.555 "state": "enabled", 00:18:04.555 "thread": "nvmf_tgt_poll_group_000", 00:18:04.555 "listen_address": { 00:18:04.555 "trtype": "TCP", 00:18:04.555 "adrfam": "IPv4", 00:18:04.555 "traddr": "10.0.0.2", 00:18:04.555 "trsvcid": "4420" 00:18:04.555 }, 00:18:04.555 "peer_address": { 00:18:04.555 "trtype": "TCP", 00:18:04.555 "adrfam": "IPv4", 00:18:04.555 "traddr": "10.0.0.1", 00:18:04.555 "trsvcid": "45720" 00:18:04.555 }, 00:18:04.555 "auth": { 00:18:04.555 "state": "completed", 00:18:04.555 "digest": "sha512", 00:18:04.555 "dhgroup": "ffdhe6144" 00:18:04.555 } 00:18:04.555 } 00:18:04.555 ]' 00:18:04.555 18:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.555 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.812 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.744 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.309 00:18:06.309 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.309 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.309 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.568 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.568 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.568 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.568 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.568 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.568 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.568 { 00:18:06.568 "cntlid": 131, 00:18:06.568 "qid": 0, 00:18:06.568 "state": "enabled", 00:18:06.568 "thread": "nvmf_tgt_poll_group_000", 00:18:06.568 "listen_address": { 00:18:06.568 "trtype": "TCP", 00:18:06.568 "adrfam": "IPv4", 00:18:06.568 "traddr": "10.0.0.2", 00:18:06.568 "trsvcid": "4420" 00:18:06.568 }, 00:18:06.568 "peer_address": { 00:18:06.568 "trtype": "TCP", 00:18:06.568 "adrfam": "IPv4", 00:18:06.568 "traddr": "10.0.0.1", 00:18:06.568 "trsvcid": "45734" 00:18:06.568 }, 00:18:06.568 "auth": { 00:18:06.568 "state": "completed", 00:18:06.568 "digest": "sha512", 00:18:06.568 "dhgroup": "ffdhe6144" 00:18:06.568 } 00:18:06.568 } 00:18:06.568 ]' 00:18:06.568 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.568 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.568 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.568 18:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.568 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.827 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.827 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.827 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.085 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:18:07.652 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.653 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:07.653 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.653 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.653 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.653 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.653 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.653 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.911 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.476 00:18:08.476 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.476 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.476 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.734 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.735 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.735 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.735 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.735 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.735 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.735 { 00:18:08.735 "cntlid": 133, 00:18:08.735 "qid": 0, 00:18:08.735 "state": "enabled", 00:18:08.735 "thread": "nvmf_tgt_poll_group_000", 00:18:08.735 "listen_address": { 00:18:08.735 "trtype": "TCP", 00:18:08.735 "adrfam": "IPv4", 00:18:08.735 "traddr": "10.0.0.2", 00:18:08.735 "trsvcid": "4420" 00:18:08.735 }, 00:18:08.735 "peer_address": { 00:18:08.735 "trtype": "TCP", 00:18:08.735 "adrfam": "IPv4", 00:18:08.735 "traddr": "10.0.0.1", 00:18:08.735 "trsvcid": "45772" 00:18:08.735 }, 00:18:08.735 "auth": { 00:18:08.735 "state": "completed", 00:18:08.735 "digest": "sha512", 00:18:08.735 "dhgroup": "ffdhe6144" 00:18:08.735 } 00:18:08.735 } 00:18:08.735 ]' 00:18:08.735 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.735 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.735 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.993 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.993 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.993 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.993 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.993 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.249 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:18:09.814 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.814 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:09.814 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.814 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.814 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.814 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.814 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.814 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.075 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.649 00:18:10.649 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.649 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.649 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.907 { 00:18:10.907 "cntlid": 135, 00:18:10.907 "qid": 0, 00:18:10.907 "state": "enabled", 00:18:10.907 "thread": "nvmf_tgt_poll_group_000", 00:18:10.907 "listen_address": { 00:18:10.907 "trtype": "TCP", 00:18:10.907 "adrfam": "IPv4", 00:18:10.907 "traddr": "10.0.0.2", 00:18:10.907 "trsvcid": "4420" 00:18:10.907 }, 00:18:10.907 "peer_address": { 00:18:10.907 "trtype": "TCP", 00:18:10.907 "adrfam": "IPv4", 00:18:10.907 "traddr": "10.0.0.1", 00:18:10.907 "trsvcid": "45792" 00:18:10.907 }, 00:18:10.907 "auth": { 00:18:10.907 "state": "completed", 00:18:10.907 "digest": "sha512", 00:18:10.907 "dhgroup": "ffdhe6144" 00:18:10.907 } 00:18:10.907 } 00:18:10.907 ]' 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.907 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.473 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:18:12.040 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.040 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:12.040 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.040 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.040 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.040 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.040 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.040 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.040 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.298 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.866 00:18:12.866 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.866 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.866 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.124 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.124 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.124 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.124 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.124 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.124 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.124 { 00:18:13.124 "cntlid": 137, 00:18:13.124 "qid": 0, 00:18:13.124 "state": "enabled", 00:18:13.124 "thread": "nvmf_tgt_poll_group_000", 00:18:13.124 "listen_address": { 00:18:13.124 "trtype": "TCP", 00:18:13.124 "adrfam": "IPv4", 00:18:13.124 "traddr": "10.0.0.2", 00:18:13.124 "trsvcid": "4420" 00:18:13.124 }, 00:18:13.124 "peer_address": { 00:18:13.124 "trtype": "TCP", 00:18:13.124 "adrfam": "IPv4", 00:18:13.124 "traddr": "10.0.0.1", 00:18:13.124 "trsvcid": "45822" 00:18:13.124 }, 00:18:13.124 "auth": { 00:18:13.124 "state": "completed", 00:18:13.124 "digest": "sha512", 00:18:13.124 "dhgroup": "ffdhe8192" 00:18:13.124 } 00:18:13.124 } 00:18:13.124 ]' 00:18:13.124 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.124 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.124 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.382 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.382 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.382 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.382 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.382 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.640 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:18:14.206 18:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.206 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:14.206 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.206 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.206 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.206 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.206 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.206 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.464 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:14.464 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.464 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.464 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.464 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.464 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.464 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.464 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.464 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.721 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.722 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.722 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.323 00:18:15.323 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.323 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:18:15.323 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.582 { 00:18:15.582 "cntlid": 139, 00:18:15.582 "qid": 0, 00:18:15.582 "state": "enabled", 00:18:15.582 "thread": "nvmf_tgt_poll_group_000", 00:18:15.582 "listen_address": { 00:18:15.582 "trtype": "TCP", 00:18:15.582 "adrfam": "IPv4", 00:18:15.582 "traddr": "10.0.0.2", 00:18:15.582 "trsvcid": "4420" 00:18:15.582 }, 00:18:15.582 "peer_address": { 00:18:15.582 "trtype": "TCP", 00:18:15.582 "adrfam": "IPv4", 00:18:15.582 "traddr": "10.0.0.1", 00:18:15.582 "trsvcid": "40524" 00:18:15.582 }, 00:18:15.582 "auth": { 00:18:15.582 "state": "completed", 00:18:15.582 "digest": "sha512", 00:18:15.582 "dhgroup": "ffdhe8192" 00:18:15.582 } 00:18:15.582 } 00:18:15.582 ]' 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.582 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.147 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:01:MGNlYTNjM2JmNWQ1N2JiOTczMTYyNzA5YzllYWIxMTBeGuOF: --dhchap-ctrl-secret DHHC-1:02:M2Q1NTc2ZmNmZjcwMzQwNTY3ZmUxYjdmNDllOGYyMDZjNTcxMDRmYzExZjViY2My55iSBg==: 00:18:16.712 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.712 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:16.712 18:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.712 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.712 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.712 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.712 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.712 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.970 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.536 00:18:17.536 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.536 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.536 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.794 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.794 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:17.794 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.794 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.053 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.053 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.053 { 00:18:18.053 "cntlid": 141, 00:18:18.053 "qid": 0, 00:18:18.053 "state": "enabled", 00:18:18.053 "thread": "nvmf_tgt_poll_group_000", 00:18:18.053 "listen_address": { 00:18:18.053 "trtype": "TCP", 00:18:18.053 "adrfam": "IPv4", 00:18:18.053 "traddr": "10.0.0.2", 00:18:18.053 "trsvcid": "4420" 00:18:18.053 }, 00:18:18.053 "peer_address": { 00:18:18.053 "trtype": "TCP", 00:18:18.053 "adrfam": "IPv4", 00:18:18.053 "traddr": "10.0.0.1", 00:18:18.053 "trsvcid": "40548" 00:18:18.053 }, 00:18:18.053 "auth": { 00:18:18.053 "state": "completed", 00:18:18.053 "digest": "sha512", 00:18:18.053 "dhgroup": "ffdhe8192" 00:18:18.053 } 00:18:18.053 } 00:18:18.053 ]' 00:18:18.053 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.053 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.053 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.053 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.053 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.053 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.053 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.053 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.310 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:02:Mjg2MjBkZDBhNzBmNjhiNTAxZjQ2MTliMGVmODY3MGU1NzNiNjFhZTgwZWFkM2ViDAKtvg==: --dhchap-ctrl-secret DHHC-1:01:NzhiNTE1OWIzYzY4N2FiMTk3MWMyM2I2Njk5ZDAzYmFDNOZ8: 00:18:19.243 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.243 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:19.243 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.243 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.243 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.243 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:19.243 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.243 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.243 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:19.243 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.244 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.244 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:19.244 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.244 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.244 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:18:19.244 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.244 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.244 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.244 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.244 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.811 00:18:19.811 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.811 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.811 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.377 { 00:18:20.377 "cntlid": 
143, 00:18:20.377 "qid": 0, 00:18:20.377 "state": "enabled", 00:18:20.377 "thread": "nvmf_tgt_poll_group_000", 00:18:20.377 "listen_address": { 00:18:20.377 "trtype": "TCP", 00:18:20.377 "adrfam": "IPv4", 00:18:20.377 "traddr": "10.0.0.2", 00:18:20.377 "trsvcid": "4420" 00:18:20.377 }, 00:18:20.377 "peer_address": { 00:18:20.377 "trtype": "TCP", 00:18:20.377 "adrfam": "IPv4", 00:18:20.377 "traddr": "10.0.0.1", 00:18:20.377 "trsvcid": "40566" 00:18:20.377 }, 00:18:20.377 "auth": { 00:18:20.377 "state": "completed", 00:18:20.377 "digest": "sha512", 00:18:20.377 "dhgroup": "ffdhe8192" 00:18:20.377 } 00:18:20.377 } 00:18:20.377 ]' 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.377 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.636 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.570 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.571 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.571 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.571 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.571 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.571 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.505 00:18:22.505 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.505 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.505 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.505 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.505 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.505 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.505 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.505 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.505 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.505 { 00:18:22.505 
"cntlid": 145, 00:18:22.505 "qid": 0, 00:18:22.505 "state": "enabled", 00:18:22.505 "thread": "nvmf_tgt_poll_group_000", 00:18:22.505 "listen_address": { 00:18:22.505 "trtype": "TCP", 00:18:22.505 "adrfam": "IPv4", 00:18:22.505 "traddr": "10.0.0.2", 00:18:22.505 "trsvcid": "4420" 00:18:22.505 }, 00:18:22.506 "peer_address": { 00:18:22.506 "trtype": "TCP", 00:18:22.506 "adrfam": "IPv4", 00:18:22.506 "traddr": "10.0.0.1", 00:18:22.506 "trsvcid": "40586" 00:18:22.506 }, 00:18:22.506 "auth": { 00:18:22.506 "state": "completed", 00:18:22.506 "digest": "sha512", 00:18:22.506 "dhgroup": "ffdhe8192" 00:18:22.506 } 00:18:22.506 } 00:18:22.506 ]' 00:18:22.506 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.506 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.506 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.764 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.764 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.764 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.764 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.764 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.022 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:00:ZWRkOGJkNTdmNjJjNWIxZmE1NmRkNDcwMzg0NjUxY2I5NThhYzMwNjg3ZTI1YWQ2si6sAA==: --dhchap-ctrl-secret DHHC-1:03:NTEzMGFiMWJjMWQ0YzhlMjU2OWIyNWU0MGE5YjdhODBjOTJhY2U0OTY0NDE4ODQ5ZWYyMTRmNmU2ZDU1OGYwMHskUL4=: 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:23.957 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:24.524 request: 00:18:24.524 { 00:18:24.524 "name": "nvme0", 00:18:24.524 "trtype": "tcp", 00:18:24.524 "traddr": "10.0.0.2", 00:18:24.524 "adrfam": "ipv4", 00:18:24.524 "trsvcid": "4420", 00:18:24.524 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96", 00:18:24.524 "prchk_reftag": false, 00:18:24.524 "prchk_guard": false, 00:18:24.524 "hdgst": false, 00:18:24.524 "ddgst": false, 00:18:24.524 "dhchap_key": "key2", 00:18:24.524 "method": "bdev_nvme_attach_controller", 00:18:24.524 "req_id": 1 00:18:24.524 } 00:18:24.524 Got JSON-RPC error response 00:18:24.524 response: 00:18:24.524 { 00:18:24.524 "code": -5, 00:18:24.524 "message": "Input/output error" 00:18:24.524 } 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.524 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:24.525 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:24.525 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:24.525 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:24.525 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.525 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:24.525 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.525 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:24.525 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:25.093 request: 00:18:25.093 { 00:18:25.093 "name": "nvme0", 00:18:25.093 "trtype": "tcp", 00:18:25.093 "traddr": "10.0.0.2", 00:18:25.093 "adrfam": "ipv4", 00:18:25.093 "trsvcid": "4420", 00:18:25.093 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96", 00:18:25.093 "prchk_reftag": false, 00:18:25.093 "prchk_guard": false, 00:18:25.093 "hdgst": false, 00:18:25.093 "ddgst": false, 00:18:25.093 "dhchap_key": "key1", 00:18:25.093 "dhchap_ctrlr_key": "ckey2", 00:18:25.093 "method": "bdev_nvme_attach_controller", 00:18:25.093 "req_id": 1 00:18:25.093 } 00:18:25.093 Got JSON-RPC error response 00:18:25.093 response: 00:18:25.093 { 00:18:25.093 "code": -5, 00:18:25.093 "message": "Input/output error" 
00:18:25.093 } 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key1 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.093 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.661 request: 00:18:25.661 { 00:18:25.661 "name": "nvme0", 00:18:25.661 "trtype": "tcp", 00:18:25.661 "traddr": "10.0.0.2", 00:18:25.661 "adrfam": "ipv4", 00:18:25.661 "trsvcid": "4420", 00:18:25.661 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96", 00:18:25.661 "prchk_reftag": false, 00:18:25.661 "prchk_guard": false, 00:18:25.661 "hdgst": false, 00:18:25.661 "ddgst": false, 00:18:25.661 "dhchap_key": "key1", 00:18:25.661 "dhchap_ctrlr_key": "ckey1", 00:18:25.661 "method": "bdev_nvme_attach_controller", 00:18:25.661 "req_id": 1 00:18:25.661 } 00:18:25.661 Got JSON-RPC error response 00:18:25.661 response: 00:18:25.661 { 00:18:25.661 "code": -5, 00:18:25.661 "message": "Input/output error" 00:18:25.661 } 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 72990 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72990 ']' 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72990 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72990 00:18:25.661 killing process with pid 72990 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72990' 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72990 00:18:25.661 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72990 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=76037 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 76037 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 76037 ']' 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.037 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.998 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.998 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:27.998 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.998 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:27.998 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.998 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.998 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:27.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.998 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 76037 00:18:27.998 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 76037 ']' 00:18:27.998 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.999 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.999 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
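[editor's note] At this point the target has been relaunched with --wait-for-rpc and -L nvmf_auth (pid 76037) and the harness blocks until /var/tmp/spdk.sock answers. A minimal sketch of that bring-up with the harness helpers expanded; the polling loop and the explicit framework_start_init call are the editor's assumptions about what waitforlisten amounts to, not lines from this log:
# start the target inside the test namespace, paused until RPC configuration is done
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
# poll the UNIX-domain RPC socket until the app responds
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# once keys/subsystems are configured, leave "wait for RPC" mode
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init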
00:18:27.999 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.999 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.256 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.256 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:28.256 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:28.256 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.256 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.514 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.514 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:28.514 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.514 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.514 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:28.514 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:28.514 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.514 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:18:28.514 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.514 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.514 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.515 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.515 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.449 00:18:29.449 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.449 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.449 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.449 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.449 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
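[editor's note] The trace that follows dumps the qpair list for cnode0 and compares the negotiated DH-HMAC-CHAP fields with jq (digest sha512, dhgroup ffdhe8192, state completed). A compact sketch of the same verification with the rpc_cmd wrapper expanded; the default /var/tmp/spdk.sock socket is assumed, the harness may pass it explicitly:
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha512
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: ffdhe8192
jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed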
00:18:29.449 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.449 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.449 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.449 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.449 { 00:18:29.449 "cntlid": 1, 00:18:29.449 "qid": 0, 00:18:29.449 "state": "enabled", 00:18:29.449 "thread": "nvmf_tgt_poll_group_000", 00:18:29.449 "listen_address": { 00:18:29.449 "trtype": "TCP", 00:18:29.449 "adrfam": "IPv4", 00:18:29.449 "traddr": "10.0.0.2", 00:18:29.449 "trsvcid": "4420" 00:18:29.449 }, 00:18:29.449 "peer_address": { 00:18:29.449 "trtype": "TCP", 00:18:29.449 "adrfam": "IPv4", 00:18:29.449 "traddr": "10.0.0.1", 00:18:29.449 "trsvcid": "36888" 00:18:29.449 }, 00:18:29.449 "auth": { 00:18:29.449 "state": "completed", 00:18:29.449 "digest": "sha512", 00:18:29.449 "dhgroup": "ffdhe8192" 00:18:29.449 } 00:18:29.449 } 00:18:29.449 ]' 00:18:29.449 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.737 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.737 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.737 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.737 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.737 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.737 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.737 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.995 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid 1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-secret DHHC-1:03:NDE3MWFiZDY1NjNjOGEyN2FlMmIxZTIwMjlkMGY4ZTUwZmRjNWZlMDI1OTkyYThhYzBiMTkyNjZkYzQ5NmQ2Yr2YyeE=: 00:18:30.560 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.560 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:30.560 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.560 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.560 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.560 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --dhchap-key key3 00:18:30.560 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.560 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.817 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.817 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:30.817 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:31.075 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.075 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:31.075 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.075 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:31.075 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.075 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:31.075 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.075 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.075 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.332 request: 00:18:31.332 { 00:18:31.332 "name": "nvme0", 00:18:31.332 "trtype": "tcp", 00:18:31.332 "traddr": "10.0.0.2", 00:18:31.332 "adrfam": "ipv4", 00:18:31.332 "trsvcid": "4420", 00:18:31.332 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:31.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96", 00:18:31.332 "prchk_reftag": false, 00:18:31.332 "prchk_guard": false, 00:18:31.332 "hdgst": false, 00:18:31.332 "ddgst": false, 00:18:31.332 "dhchap_key": "key3", 00:18:31.332 "method": "bdev_nvme_attach_controller", 00:18:31.332 "req_id": 1 00:18:31.332 } 00:18:31.332 Got JSON-RPC error response 00:18:31.332 response: 00:18:31.332 { 00:18:31.332 "code": -5, 00:18:31.332 "message": "Input/output error" 00:18:31.332 } 00:18:31.332 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 
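[editor's note] The step just traced (auth.sh lines 157-158) narrows the host's allowed digests to sha256 and then asserts that attaching with key3 fails, which is the Input/output error printed above. Roughly, with the hostrpc wrapper expanded; the if/negation shape is the editor's paraphrase of the harness NOT helper, not literal test code:
hrpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
$hrpc bdev_nvme_set_options --dhchap-digests sha256
if $hrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
    echo "attach unexpectedly succeeded" >&2
    exit 1
fi
# the trace then re-widens the host options before the next case, e.g.:
$hrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192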
00:18:31.332 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.332 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.332 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.332 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:31.332 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:31.332 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:31.332 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:31.590 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.590 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:31.590 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.590 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:31.590 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.590 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:31.590 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.590 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.590 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.895 request: 00:18:31.895 { 00:18:31.895 "name": "nvme0", 00:18:31.895 "trtype": "tcp", 00:18:31.895 "traddr": "10.0.0.2", 00:18:31.895 "adrfam": "ipv4", 00:18:31.895 "trsvcid": "4420", 00:18:31.895 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:31.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96", 00:18:31.895 "prchk_reftag": false, 00:18:31.895 "prchk_guard": false, 00:18:31.895 "hdgst": false, 00:18:31.895 "ddgst": false, 00:18:31.895 "dhchap_key": "key3", 00:18:31.895 "method": "bdev_nvme_attach_controller", 00:18:31.895 "req_id": 1 00:18:31.895 } 00:18:31.895 Got JSON-RPC error response 
00:18:31.895 response: 00:18:31.895 { 00:18:31.895 "code": -5, 00:18:31.895 "message": "Input/output error" 00:18:31.895 } 00:18:31.895 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:31.895 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.895 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.895 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.895 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:31.895 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:31.895 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:31.895 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.895 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:31.895 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:32.154 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:32.154 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.154 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:32.154 18:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:32.154 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:32.412 request: 00:18:32.412 { 00:18:32.412 "name": "nvme0", 00:18:32.412 "trtype": "tcp", 00:18:32.412 "traddr": "10.0.0.2", 00:18:32.412 "adrfam": "ipv4", 00:18:32.412 "trsvcid": "4420", 00:18:32.412 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96", 00:18:32.412 "prchk_reftag": false, 00:18:32.412 "prchk_guard": false, 00:18:32.412 "hdgst": false, 00:18:32.412 "ddgst": false, 00:18:32.412 "dhchap_key": "key0", 00:18:32.412 "dhchap_ctrlr_key": "key1", 00:18:32.412 "method": "bdev_nvme_attach_controller", 00:18:32.412 "req_id": 1 00:18:32.412 } 00:18:32.412 Got JSON-RPC error response 00:18:32.412 response: 00:18:32.412 { 00:18:32.412 "code": -5, 00:18:32.412 "message": "Input/output error" 00:18:32.412 } 00:18:32.412 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:32.412 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:32.412 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:32.413 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:32.413 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:32.413 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:32.671 00:18:32.671 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:32.671 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.671 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:32.928 18:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.928 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.928 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 73022 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 73022 ']' 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 73022 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73022 00:18:33.186 killing process with pid 73022 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73022' 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 73022 00:18:33.186 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 73022 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.718 rmmod nvme_tcp 00:18:35.718 rmmod nvme_fabrics 00:18:35.718 rmmod nvme_keyring 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 76037 ']' 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 76037 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 76037 ']' 00:18:35.718 
18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 76037 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76037 00:18:35.718 killing process with pid 76037 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76037' 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 76037 00:18:35.718 18:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 76037 00:18:37.100 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1zh /tmp/spdk.key-sha256.Qo5 /tmp/spdk.key-sha384.oOw /tmp/spdk.key-sha512.IJ9 /tmp/spdk.key-sha512.xfB /tmp/spdk.key-sha384.o0I /tmp/spdk.key-sha256.4W9 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:18:37.101 ************************************ 00:18:37.101 END TEST nvmf_auth_target 00:18:37.101 ************************************ 00:18:37.101 00:18:37.101 real 2m56.480s 00:18:37.101 user 6m59.424s 00:18:37.101 sys 0m26.317s 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.101 ************************************ 00:18:37.101 START TEST nvmf_bdevio_no_huge 00:18:37.101 ************************************ 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:37.101 * Looking for test storage... 00:18:37.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.101 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:37.102 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:37.102 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:37.102 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:37.102 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:37.102 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:37.102 Cannot find device "nvmf_tgt_br" 00:18:37.102 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:18:37.102 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:37.102 Cannot find device "nvmf_tgt_br2" 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:37.102 Cannot find device "nvmf_tgt_br" 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:37.102 Cannot find device "nvmf_tgt_br2" 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:37.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:37.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:18:37.102 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:37.360 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:37.360 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:37.360 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:37.360 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:37.360 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link 
set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:37.360 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:37.360 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:37.360 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:37.360 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:37.360 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:37.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:18:37.361 00:18:37.361 --- 10.0.0.2 ping statistics --- 00:18:37.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.361 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:37.361 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:37.361 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:18:37.361 00:18:37.361 --- 10.0.0.3 ping statistics --- 00:18:37.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.361 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:37.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:37.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:37.361 00:18:37.361 --- 10.0.0.1 ping statistics --- 00:18:37.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.361 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=76393 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 76393 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 76393 ']' 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.361 18:25:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.619 [2024-07-22 18:25:49.469263] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:37.619 [2024-07-22 18:25:49.469435] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:37.877 [2024-07-22 18:25:49.678301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:38.135 [2024-07-22 18:25:49.943702] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.135 [2024-07-22 18:25:49.943769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.135 [2024-07-22 18:25:49.943802] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.135 [2024-07-22 18:25:49.943821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.135 [2024-07-22 18:25:49.943843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:38.135 [2024-07-22 18:25:49.944075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:38.135 [2024-07-22 18:25:49.944816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:38.135 [2024-07-22 18:25:49.944872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:38.135 [2024-07-22 18:25:49.944872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:38.135 [2024-07-22 18:25:50.137362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:38.393 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:38.393 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:38.393 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.393 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:38.393 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.393 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.393 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:38.393 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.393 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.393 [2024-07-22 18:25:50.384655] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.393 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.393 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:38.394 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.394 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.652 Malloc0 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.652 18:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.652 [2024-07-22 18:25:50.479631] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:38.652 { 00:18:38.652 "params": { 00:18:38.652 "name": "Nvme$subsystem", 00:18:38.652 "trtype": "$TEST_TRANSPORT", 00:18:38.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.652 "adrfam": "ipv4", 00:18:38.652 "trsvcid": "$NVMF_PORT", 00:18:38.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.652 "hdgst": ${hdgst:-false}, 00:18:38.652 "ddgst": ${ddgst:-false} 00:18:38.652 }, 00:18:38.652 "method": "bdev_nvme_attach_controller" 00:18:38.652 } 00:18:38.652 EOF 00:18:38.652 )") 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
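The heredoc template above is what gen_nvmf_target_json expands into the concrete bdev_nvme_attach_controller entry printed just below; bdevio then reads that configuration over /dev/fd/62. A minimal hand-written equivalent is sketched here, assuming SPDK's usual "subsystems"/"bdev"/"config" JSON layout and using a temporary file in place of the process substitution; the field values mirror the trace.

# Stand-in for gen_nvmf_target_json: one bdev_nvme_attach_controller entry
# pointing at the listener inside the nvmf_tgt_ns_spdk namespace.
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# bdevio then exercises the resulting Nvme1n1 bdev without hugepages, as in
# the trace (the test itself passes the JSON via --json /dev/fd/62).
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
    --json /tmp/bdevio_nvme.json --no-huge -s 1024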
00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:38.652 18:25:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:38.652 "params": { 00:18:38.652 "name": "Nvme1", 00:18:38.652 "trtype": "tcp", 00:18:38.652 "traddr": "10.0.0.2", 00:18:38.652 "adrfam": "ipv4", 00:18:38.652 "trsvcid": "4420", 00:18:38.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.652 "hdgst": false, 00:18:38.652 "ddgst": false 00:18:38.652 }, 00:18:38.652 "method": "bdev_nvme_attach_controller" 00:18:38.652 }' 00:18:38.652 [2024-07-22 18:25:50.581506] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:38.652 [2024-07-22 18:25:50.581893] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76429 ] 00:18:38.911 [2024-07-22 18:25:50.767111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:39.168 [2024-07-22 18:25:51.027295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.168 [2024-07-22 18:25:51.027463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.168 [2024-07-22 18:25:51.027614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.425 [2024-07-22 18:25:51.189359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:39.425 I/O targets: 00:18:39.425 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:39.425 00:18:39.425 00:18:39.425 CUnit - A unit testing framework for C - Version 2.1-3 00:18:39.425 http://cunit.sourceforge.net/ 00:18:39.425 00:18:39.425 00:18:39.425 Suite: bdevio tests on: Nvme1n1 00:18:39.425 Test: blockdev write read block ...passed 00:18:39.425 Test: blockdev write zeroes read block ...passed 00:18:39.425 Test: blockdev write zeroes read no split ...passed 00:18:39.683 Test: blockdev write zeroes read split ...passed 00:18:39.683 Test: blockdev write zeroes read split partial ...passed 00:18:39.683 Test: blockdev reset ...[2024-07-22 18:25:51.496565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:39.683 [2024-07-22 18:25:51.496986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:18:39.683 [2024-07-22 18:25:51.516820] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:39.683 passed 00:18:39.683 Test: blockdev write read 8 blocks ...passed 00:18:39.683 Test: blockdev write read size > 128k ...passed 00:18:39.683 Test: blockdev write read invalid size ...passed 00:18:39.683 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:39.683 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:39.683 Test: blockdev write read max offset ...passed 00:18:39.683 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:39.683 Test: blockdev writev readv 8 blocks ...passed 00:18:39.683 Test: blockdev writev readv 30 x 1block ...passed 00:18:39.683 Test: blockdev writev readv block ...passed 00:18:39.683 Test: blockdev writev readv size > 128k ...passed 00:18:39.683 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:39.683 Test: blockdev comparev and writev ...[2024-07-22 18:25:51.531899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.683 [2024-07-22 18:25:51.531962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.683 [2024-07-22 18:25:51.531996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.683 [2024-07-22 18:25:51.532018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.683 [2024-07-22 18:25:51.532433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.683 [2024-07-22 18:25:51.532481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:39.683 [2024-07-22 18:25:51.532512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.683 [2024-07-22 18:25:51.532532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:39.683 [2024-07-22 18:25:51.532914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.683 [2024-07-22 18:25:51.532951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:39.683 [2024-07-22 18:25:51.532979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.683 [2024-07-22 18:25:51.533013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:39.683 [2024-07-22 18:25:51.533416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.683 [2024-07-22 18:25:51.533452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:39.683 [2024-07-22 18:25:51.533481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.683 [2024-07-22 18:25:51.533500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:39.683 passed 00:18:39.683 Test: blockdev nvme passthru rw ...passed 00:18:39.683 Test: blockdev nvme passthru vendor specific ...passed 00:18:39.683 Test: blockdev nvme admin passthru ...[2024-07-22 18:25:51.534650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.683 [2024-07-22 18:25:51.534708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:39.683 [2024-07-22 18:25:51.534860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.683 [2024-07-22 18:25:51.534896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:39.683 [2024-07-22 18:25:51.535042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.683 [2024-07-22 18:25:51.535078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:39.683 [2024-07-22 18:25:51.535227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.683 [2024-07-22 18:25:51.535262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:39.683 passed 00:18:39.683 Test: blockdev copy ...passed 00:18:39.683 00:18:39.683 Run Summary: Type Total Ran Passed Failed Inactive 00:18:39.683 suites 1 1 n/a 0 0 00:18:39.683 tests 23 23 23 0 0 00:18:39.683 asserts 152 152 152 0 n/a 00:18:39.683 00:18:39.683 Elapsed time = 0.283 seconds 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.623 rmmod nvme_tcp 00:18:40.623 rmmod nvme_fabrics 00:18:40.623 rmmod nvme_keyring 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 76393 ']' 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 76393 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 76393 ']' 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 76393 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76393 00:18:40.623 killing process with pid 76393 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76393' 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 76393 00:18:40.623 18:25:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 76393 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:41.574 ************************************ 00:18:41.574 END TEST nvmf_bdevio_no_huge 00:18:41.574 ************************************ 00:18:41.574 00:18:41.574 real 0m4.481s 00:18:41.574 user 0m15.623s 00:18:41.574 sys 0m1.467s 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:41.574 
18:25:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.574 ************************************ 00:18:41.574 START TEST nvmf_tls 00:18:41.574 ************************************ 00:18:41.574 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:41.574 * Looking for test storage... 00:18:41.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
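Before tls.sh can start its own target, nvmftestinit below rebuilds the same veth/namespace fixture the bdevio pass used. Condensed into plain ip/iptables commands (interface names, addresses and the port 4420 rule mirror the trace; the cleanup and error handling in nvmf_veth_init are omitted), the fixture amounts to the following sketch.

# The target lives in its own network namespace; three veth pairs connect it
# to the host, with the host-side peers bridged together.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target port 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target port 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 on the host, 10.0.0.2/10.0.0.3 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and tie the host-side peers into one bridge.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic in on 4420, allow forwarding across the bridge, then
# verify reachability in both directions, as the pings further down do.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1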
00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:41.575 Cannot find device 
"nvmf_tgt_br" 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.575 Cannot find device "nvmf_tgt_br2" 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:41.575 Cannot find device "nvmf_tgt_br" 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:41.575 Cannot find device "nvmf_tgt_br2" 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:18:41.575 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:41.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:41.833 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:41.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:18:41.834 00:18:41.834 --- 10.0.0.2 ping statistics --- 00:18:41.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.834 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:41.834 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.834 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:18:41.834 00:18:41.834 --- 10.0.0.3 ping statistics --- 00:18:41.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.834 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:41.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:41.834 00:18:41.834 --- 10.0.0.1 ping statistics --- 00:18:41.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.834 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=76653 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 76653 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76653 ']' 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.834 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.092 [2024-07-22 18:25:53.962005] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:42.092 [2024-07-22 18:25:53.962186] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.350 [2024-07-22 18:25:54.140451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.608 [2024-07-22 18:25:54.433121] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.608 [2024-07-22 18:25:54.433229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.609 [2024-07-22 18:25:54.433252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.609 [2024-07-22 18:25:54.433270] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.609 [2024-07-22 18:25:54.433285] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.609 [2024-07-22 18:25:54.433344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.176 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.176 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:43.176 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:43.176 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:43.176 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.176 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.176 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:43.176 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:43.176 true 00:18:43.176 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:43.176 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:43.742 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:43.742 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:43.742 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:44.000 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:44.000 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:44.258 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:44.258 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:44.258 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:44.515 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:18:44.515 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:44.774 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:44.774 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:44.774 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:44.774 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:45.031 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:45.031 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:45.031 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:45.289 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.289 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:45.546 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:45.546 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:45.546 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:45.804 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.804 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:46.066 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:46.066 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:46.066 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:46.066 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.yoLjZdNKgL 00:18:46.066 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:46.066 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.5hW9wDMdT6 00:18:46.066 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:46.066 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:46.066 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.yoLjZdNKgL 00:18:46.066 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.5hW9wDMdT6 00:18:46.066 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:46.336 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:46.902 [2024-07-22 18:25:58.788921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:47.161 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.yoLjZdNKgL 00:18:47.161 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.yoLjZdNKgL 00:18:47.161 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:47.161 [2024-07-22 18:25:59.136509] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.161 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:47.726 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:47.726 [2024-07-22 18:25:59.652667] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.726 [2024-07-22 18:25:59.653029] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.726 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:47.985 malloc0 00:18:47.985 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:48.242 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yoLjZdNKgL 00:18:48.499 [2024-07-22 18:26:00.444276] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:48.499 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yoLjZdNKgL 00:19:00.696 Initializing NVMe Controllers 00:19:00.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:00.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:00.696 Initialization complete. Launching workers. 00:19:00.696 ======================================================== 00:19:00.696 Latency(us) 00:19:00.696 Device Information : IOPS MiB/s Average min max 00:19:00.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6422.25 25.09 9969.09 1695.18 14541.83 00:19:00.696 ======================================================== 00:19:00.696 Total : 6422.25 25.09 9969.09 1695.18 14541.83 00:19:00.696 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yoLjZdNKgL 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yoLjZdNKgL' 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=76887 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 76887 /var/tmp/bdevperf.sock 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 76887 ']' 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
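Up to this point the script has produced two PSKs in the NVMe/TCP interchange format (the NVMeTLSkey-1 prefix, a two-digit hash field, and a base64 payload), written them to temp files with 0600 permissions, forced TLS 1.3 on the ssl sock implementation, and brought up a TLS-enabled target before driving traffic with spdk_nvme_perf. A condensed sketch of that target-side flow, using only commands and values that appear in this log (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the key path is the one mktemp returned in this run):

# The base64 payload of the first interchange key decodes to the configured PSK bytes plus a
# short trailer (a checksum field of the interchange format; shown here only as raw bytes).
echo -n 'MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ' | base64 -d | xxd

# Target-side setup as executed by setup_nvmf_tgt(): TLS 1.3 on the ssl sock impl, a TCP
# transport, a subsystem backed by a malloc bdev, a TLS listener (-k), and a host entry
# that binds the PSK file to the hostnqn/subnqn pair.
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yoLjZdNKgL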
00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.696 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.696 [2024-07-22 18:26:10.900677] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:00.696 [2024-07-22 18:26:10.900875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76887 ] 00:19:00.696 [2024-07-22 18:26:11.080454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.696 [2024-07-22 18:26:11.407023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.696 [2024-07-22 18:26:11.616490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:00.696 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:00.696 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:00.696 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yoLjZdNKgL 00:19:00.696 [2024-07-22 18:26:12.075299] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.696 [2024-07-22 18:26:12.075622] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:00.696 TLSTESTn1 00:19:00.696 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:00.696 Running I/O for 10 seconds... 
00:19:10.670 00:19:10.670 Latency(us) 00:19:10.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.670 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:10.670 Verification LBA range: start 0x0 length 0x2000 00:19:10.670 TLSTESTn1 : 10.02 2833.29 11.07 0.00 0.00 45091.44 7566.43 35746.91 00:19:10.670 =================================================================================================================== 00:19:10.670 Total : 2833.29 11.07 0.00 0.00 45091.44 7566.43 35746.91 00:19:10.670 0 00:19:10.670 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:10.670 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 76887 00:19:10.670 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76887 ']' 00:19:10.670 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76887 00:19:10.670 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:10.670 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:10.670 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76887 00:19:10.670 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:10.670 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:10.670 killing process with pid 76887 00:19:10.670 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76887' 00:19:10.670 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76887 00:19:10.670 Received shutdown signal, test time was about 10.000000 seconds 00:19:10.670 00:19:10.671 Latency(us) 00:19:10.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.671 =================================================================================================================== 00:19:10.671 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:10.671 [2024-07-22 18:26:22.340654] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:10.671 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76887 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5hW9wDMdT6 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5hW9wDMdT6 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:11.604 18:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5hW9wDMdT6 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5hW9wDMdT6' 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77028 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77028 /var/tmp/bdevperf.sock 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77028 ']' 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:11.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:11.604 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.862 [2024-07-22 18:26:23.669957] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:11.862 [2024-07-22 18:26:23.670131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77028 ] 00:19:11.862 [2024-07-22 18:26:23.843619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.120 [2024-07-22 18:26:24.083033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.378 [2024-07-22 18:26:24.283293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:12.636 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:12.636 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:12.636 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5hW9wDMdT6 00:19:12.893 [2024-07-22 18:26:24.787193] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:12.894 [2024-07-22 18:26:24.787392] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:12.894 [2024-07-22 18:26:24.797609] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:12.894 [2024-07-22 18:26:24.798360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:19:12.894 [2024-07-22 18:26:24.799333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:19:12.894 [2024-07-22 18:26:24.800324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:12.894 [2024-07-22 18:26:24.800368] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:12.894 [2024-07-22 18:26:24.800387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
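The failure above is the intended outcome of this test case: the target only registered /tmp/tmp.yoLjZdNKgL for nqn.2016-06.io.spdk:host1, while this attach presented the second key, so the secure channel is torn down (the recv on the socket returns errno 107, ENOTCONN) and the RPC reported below fails with an I/O error. Side by side, using the exact paths from this run (a sketch of the contrast, not additional test output):

# Rejected: the PSK in this file is not the one bound to host1 on the target.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5hW9wDMdT6
# Accepted (the earlier bdevperf run): the identical command with the registered key.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yoLjZdNKgL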
00:19:12.894 request: 00:19:12.894 { 00:19:12.894 "name": "TLSTEST", 00:19:12.894 "trtype": "tcp", 00:19:12.894 "traddr": "10.0.0.2", 00:19:12.894 "adrfam": "ipv4", 00:19:12.894 "trsvcid": "4420", 00:19:12.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.894 "prchk_reftag": false, 00:19:12.894 "prchk_guard": false, 00:19:12.894 "hdgst": false, 00:19:12.894 "ddgst": false, 00:19:12.894 "psk": "/tmp/tmp.5hW9wDMdT6", 00:19:12.894 "method": "bdev_nvme_attach_controller", 00:19:12.894 "req_id": 1 00:19:12.894 } 00:19:12.894 Got JSON-RPC error response 00:19:12.894 response: 00:19:12.894 { 00:19:12.894 "code": -5, 00:19:12.894 "message": "Input/output error" 00:19:12.894 } 00:19:12.894 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 77028 00:19:12.894 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77028 ']' 00:19:12.894 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77028 00:19:12.894 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:12.894 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.894 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77028 00:19:12.894 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:12.894 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:12.894 killing process with pid 77028 00:19:12.894 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77028' 00:19:12.894 Received shutdown signal, test time was about 10.000000 seconds 00:19:12.894 00:19:12.894 Latency(us) 00:19:12.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.894 =================================================================================================================== 00:19:12.894 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:12.894 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77028 00:19:12.894 [2024-07-22 18:26:24.842990] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:12.894 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77028 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yoLjZdNKgL 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yoLjZdNKgL 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yoLjZdNKgL 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yoLjZdNKgL' 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77062 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77062 /var/tmp/bdevperf.sock 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77062 ']' 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.298 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.298 [2024-07-22 18:26:26.029595] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:14.298 [2024-07-22 18:26:26.029813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77062 ] 00:19:14.298 [2024-07-22 18:26:26.205656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.556 [2024-07-22 18:26:26.446245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.812 [2024-07-22 18:26:26.646463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:15.069 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.069 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:15.069 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.yoLjZdNKgL 00:19:15.327 [2024-07-22 18:26:27.173472] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.328 [2024-07-22 18:26:27.173660] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:15.328 [2024-07-22 18:26:27.187618] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:15.328 [2024-07-22 18:26:27.187677] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:15.328 [2024-07-22 18:26:27.187769] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:15.328 [2024-07-22 18:26:27.188501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:19:15.328 [2024-07-22 18:26:27.189473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:19:15.328 [2024-07-22 18:26:27.190469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:15.328 [2024-07-22 18:26:27.190543] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:15.328 [2024-07-22 18:26:27.190564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
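Here the key file is the registered one but the attach claims hostnqn host2, so the target's PSK lookup fails: the error above shows the TLS identity it searched for, built from the host and subsystem NQNs ("NVMe0R01 <hostnqn> <subnqn>"), and only host1 was ever added to cnode1. A hypothetical registration that would make this identity resolvable (not part of the test, which deliberately exercises the failure path):

# Hypothetical: bind a PSK to host2 on cnode1 so the identity "NVMe0R01 ...host2 ...cnode1" exists.
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.yoLjZdNKgL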
00:19:15.328 request: 00:19:15.328 { 00:19:15.328 "name": "TLSTEST", 00:19:15.328 "trtype": "tcp", 00:19:15.328 "traddr": "10.0.0.2", 00:19:15.328 "adrfam": "ipv4", 00:19:15.328 "trsvcid": "4420", 00:19:15.328 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.328 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:15.328 "prchk_reftag": false, 00:19:15.328 "prchk_guard": false, 00:19:15.328 "hdgst": false, 00:19:15.328 "ddgst": false, 00:19:15.328 "psk": "/tmp/tmp.yoLjZdNKgL", 00:19:15.328 "method": "bdev_nvme_attach_controller", 00:19:15.328 "req_id": 1 00:19:15.328 } 00:19:15.328 Got JSON-RPC error response 00:19:15.328 response: 00:19:15.328 { 00:19:15.328 "code": -5, 00:19:15.328 "message": "Input/output error" 00:19:15.328 } 00:19:15.328 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 77062 00:19:15.328 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77062 ']' 00:19:15.328 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77062 00:19:15.328 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:15.328 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.328 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77062 00:19:15.328 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:15.328 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:15.328 killing process with pid 77062 00:19:15.328 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77062' 00:19:15.328 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.328 00:19:15.328 Latency(us) 00:19:15.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.328 =================================================================================================================== 00:19:15.328 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.328 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77062 00:19:15.328 [2024-07-22 18:26:27.238974] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:15.328 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77062 00:19:16.702 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:16.702 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:16.702 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:16.702 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:16.702 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yoLjZdNKgL 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yoLjZdNKgL 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yoLjZdNKgL 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yoLjZdNKgL' 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77097 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77097 /var/tmp/bdevperf.sock 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77097 ']' 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.703 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.703 [2024-07-22 18:26:28.524155] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:16.703 [2024-07-22 18:26:28.524317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77097 ] 00:19:16.703 [2024-07-22 18:26:28.688595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.962 [2024-07-22 18:26:28.926835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.220 [2024-07-22 18:26:29.132681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:17.787 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.787 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:17.787 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yoLjZdNKgL 00:19:17.787 [2024-07-22 18:26:29.711034] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.787 [2024-07-22 18:26:29.711270] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:17.787 [2024-07-22 18:26:29.720873] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:17.787 [2024-07-22 18:26:29.720938] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:17.787 [2024-07-22 18:26:29.721021] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:17.787 [2024-07-22 18:26:29.721061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:19:17.787 [2024-07-22 18:26:29.722034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:19:17.787 [2024-07-22 18:26:29.723025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:17.787 [2024-07-22 18:26:29.723075] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:17.787 [2024-07-22 18:26:29.723121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
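The mirror-image case: hostnqn is the registered host1, but the attach targets subsystem cnode2, which was never created on this target, so the PSK lookup for "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" has nothing to match. Hypothetically (again, not something this test does), the identity would only resolve after a second subsystem with its own listener and host/PSK binding existed, along the same lines as the cnode1 setup earlier in the log:

# Hypothetical second subsystem; the serial number here is made up for illustration.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yoLjZdNKgL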
00:19:17.787 request: 00:19:17.787 { 00:19:17.787 "name": "TLSTEST", 00:19:17.787 "trtype": "tcp", 00:19:17.787 "traddr": "10.0.0.2", 00:19:17.787 "adrfam": "ipv4", 00:19:17.787 "trsvcid": "4420", 00:19:17.787 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:17.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.787 "prchk_reftag": false, 00:19:17.787 "prchk_guard": false, 00:19:17.787 "hdgst": false, 00:19:17.787 "ddgst": false, 00:19:17.787 "psk": "/tmp/tmp.yoLjZdNKgL", 00:19:17.787 "method": "bdev_nvme_attach_controller", 00:19:17.787 "req_id": 1 00:19:17.787 } 00:19:17.787 Got JSON-RPC error response 00:19:17.787 response: 00:19:17.787 { 00:19:17.787 "code": -5, 00:19:17.787 "message": "Input/output error" 00:19:17.787 } 00:19:17.787 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 77097 00:19:17.787 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77097 ']' 00:19:17.787 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77097 00:19:17.787 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:17.787 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.787 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77097 00:19:17.787 killing process with pid 77097 00:19:17.787 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.787 00:19:17.787 Latency(us) 00:19:17.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.787 =================================================================================================================== 00:19:17.787 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.787 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:17.788 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:17.788 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77097' 00:19:17.788 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77097 00:19:17.788 [2024-07-22 18:26:29.775267] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:17.788 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77097 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77140 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77140 /var/tmp/bdevperf.sock 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77140 ']' 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.166 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.166 [2024-07-22 18:26:30.984850] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:19.166 [2024-07-22 18:26:30.985029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77140 ] 00:19:19.166 [2024-07-22 18:26:31.161899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.425 [2024-07-22 18:26:31.406863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.684 [2024-07-22 18:26:31.612008] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:19.942 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.942 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:19.942 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:20.201 [2024-07-22 18:26:32.118653] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:20.201 [2024-07-22 18:26:32.120876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:19:20.201 [2024-07-22 18:26:32.121872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:20.201 [2024-07-22 18:26:32.121909] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:20.201 [2024-07-22 18:26:32.121933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
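This negative case omits --psk entirely. The listener on 10.0.0.2:4420 was created with -k (the secure-channel flag used for the TLS listeners in this test), so a plain NVMe/TCP attach is dropped before the controller can initialize; note that the request dump below has no "psk" field at all, unlike the previous attempts. The relevant pairing from this run:

# Listener requires TLS (the -k flag passed during setup_nvmf_tgt):
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
# An attach without a key is therefore rejected; supplying --psk /tmp/tmp.yoLjZdNKgL (as in the
# first bdevperf run) is what allowed the connection to complete.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1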
00:19:20.201 request: 00:19:20.201 { 00:19:20.201 "name": "TLSTEST", 00:19:20.201 "trtype": "tcp", 00:19:20.201 "traddr": "10.0.0.2", 00:19:20.201 "adrfam": "ipv4", 00:19:20.201 "trsvcid": "4420", 00:19:20.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.201 "prchk_reftag": false, 00:19:20.201 "prchk_guard": false, 00:19:20.201 "hdgst": false, 00:19:20.201 "ddgst": false, 00:19:20.201 "method": "bdev_nvme_attach_controller", 00:19:20.201 "req_id": 1 00:19:20.201 } 00:19:20.201 Got JSON-RPC error response 00:19:20.201 response: 00:19:20.201 { 00:19:20.201 "code": -5, 00:19:20.201 "message": "Input/output error" 00:19:20.201 } 00:19:20.201 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 77140 00:19:20.201 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77140 ']' 00:19:20.201 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77140 00:19:20.201 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:20.201 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.201 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77140 00:19:20.201 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:20.201 killing process with pid 77140 00:19:20.201 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.201 00:19:20.201 Latency(us) 00:19:20.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.201 =================================================================================================================== 00:19:20.201 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:20.201 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:20.201 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77140' 00:19:20.201 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77140 00:19:20.201 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77140 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 76653 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 76653 ']' 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 76653 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # ps --no-headers -o comm= 76653 00:19:21.576 killing process with pid 76653 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76653' 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 76653 00:19:21.576 [2024-07-22 18:26:33.377316] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:21.576 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 76653 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.jij0jggWrV 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.jij0jggWrV 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
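With the original target gone, the script derives a longer PSK (the digest argument of 2 selects the NVMeTLSkey-1:02: form of the header), stores it in a fresh temp file, locks the file down to 0600, and starts a new nvmf_tgt on core mask 0x2. The 0600 step is not cosmetic: near the end of this log the same file is deliberately chmod'ed to 0666 and bdev_nvme then refuses it with "Incorrect permissions for PSK file". The key-handling pattern, sketched with the values from this run (the redirect into the temp file is implied by the script but not visible in the xtrace):

key_long_path=/tmp/tmp.jij0jggWrV   # returned by mktemp above
echo -n 'NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:' > "$key_long_path"
chmod 0600 "$key_long_path"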
00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77204 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77204 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77204 ']' 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.950 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.950 [2024-07-22 18:26:34.926666] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:22.950 [2024-07-22 18:26:34.926818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.208 [2024-07-22 18:26:35.093856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.466 [2024-07-22 18:26:35.348681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.466 [2024-07-22 18:26:35.348754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.466 [2024-07-22 18:26:35.348789] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.466 [2024-07-22 18:26:35.348811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.466 [2024-07-22 18:26:35.348823] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:23.466 [2024-07-22 18:26:35.348878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.724 [2024-07-22 18:26:35.562030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:23.982 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.982 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:23.982 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.982 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:23.982 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.982 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.982 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.jij0jggWrV 00:19:23.982 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jij0jggWrV 00:19:23.982 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:24.240 [2024-07-22 18:26:36.124782] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.240 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:24.499 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:24.756 [2024-07-22 18:26:36.648903] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:24.756 [2024-07-22 18:26:36.649262] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.756 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:25.015 malloc0 00:19:25.015 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:25.274 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jij0jggWrV 00:19:25.532 [2024-07-22 18:26:37.404991] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jij0jggWrV 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jij0jggWrV' 00:19:25.532 18:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77253 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77253 /var/tmp/bdevperf.sock 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77253 ']' 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.532 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.532 [2024-07-22 18:26:37.515903] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:25.532 [2024-07-22 18:26:37.516070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77253 ] 00:19:25.791 [2024-07-22 18:26:37.677587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.050 [2024-07-22 18:26:37.926660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.376 [2024-07-22 18:26:38.134243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jij0jggWrV 00:19:26.915 [2024-07-22 18:26:38.708479] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.915 [2024-07-22 18:26:38.708665] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:26.915 TLSTESTn1 00:19:26.915 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:27.174 Running I/O for 10 seconds... 
00:19:37.142 00:19:37.142 Latency(us) 00:19:37.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.142 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:37.142 Verification LBA range: start 0x0 length 0x2000 00:19:37.142 TLSTESTn1 : 10.02 2848.53 11.13 0.00 0.00 44842.77 9175.04 40274.85 00:19:37.142 =================================================================================================================== 00:19:37.142 Total : 2848.53 11.13 0.00 0.00 44842.77 9175.04 40274.85 00:19:37.142 0 00:19:37.142 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:37.142 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 77253 00:19:37.142 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77253 ']' 00:19:37.142 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77253 00:19:37.142 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:37.142 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.142 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77253 00:19:37.142 killing process with pid 77253 00:19:37.142 Received shutdown signal, test time was about 10.000000 seconds 00:19:37.142 00:19:37.142 Latency(us) 00:19:37.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.142 =================================================================================================================== 00:19:37.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.142 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:37.142 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:37.142 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77253' 00:19:37.142 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77253 00:19:37.142 [2024-07-22 18:26:49.004460] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:37.142 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77253 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.jij0jggWrV 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jij0jggWrV 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jij0jggWrV 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:38.518 18:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jij0jggWrV 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jij0jggWrV' 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=77396 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 77396 /var/tmp/bdevperf.sock 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77396 ']' 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.518 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.518 [2024-07-22 18:26:50.270241] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:38.518 [2024-07-22 18:26:50.270660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77396 ] 00:19:38.519 [2024-07-22 18:26:50.450328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.812 [2024-07-22 18:26:50.692961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.070 [2024-07-22 18:26:50.893610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:39.329 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.329 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:39.329 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jij0jggWrV 00:19:39.588 [2024-07-22 18:26:51.446073] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.588 [2024-07-22 18:26:51.446169] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:39.588 [2024-07-22 18:26:51.446189] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.jij0jggWrV 00:19:39.588 request: 00:19:39.588 { 00:19:39.588 "name": "TLSTEST", 00:19:39.588 "trtype": "tcp", 00:19:39.588 "traddr": "10.0.0.2", 00:19:39.588 "adrfam": "ipv4", 00:19:39.588 "trsvcid": "4420", 00:19:39.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.588 "prchk_reftag": false, 00:19:39.588 "prchk_guard": false, 00:19:39.588 "hdgst": false, 00:19:39.588 "ddgst": false, 00:19:39.588 "psk": "/tmp/tmp.jij0jggWrV", 00:19:39.588 "method": "bdev_nvme_attach_controller", 00:19:39.588 "req_id": 1 00:19:39.588 } 00:19:39.588 Got JSON-RPC error response 00:19:39.588 response: 00:19:39.588 { 00:19:39.588 "code": -1, 00:19:39.588 "message": "Operation not permitted" 00:19:39.588 } 00:19:39.588 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 77396 00:19:39.588 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77396 ']' 00:19:39.588 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77396 00:19:39.588 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:39.588 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.588 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77396 00:19:39.588 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:39.588 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:39.588 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77396' 00:19:39.588 killing process with pid 77396 00:19:39.588 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77396 00:19:39.588 Received shutdown signal, test 
time was about 10.000000 seconds 00:19:39.588 00:19:39.588 Latency(us) 00:19:39.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.588 =================================================================================================================== 00:19:39.588 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:39.588 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77396 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 77204 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77204 ']' 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77204 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77204 00:19:40.965 killing process with pid 77204 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77204' 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77204 00:19:40.965 [2024-07-22 18:26:52.807841] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:40.965 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77204 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77458 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77458 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77458 ']' 00:19:42.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
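The block above is the expected failure, not a regression: target/tls.sh@170 deliberately loosened the key file to mode 0666, and bdev_nvme_attach_controller then refuses it ("Incorrect permissions for PSK file" / "Could not load PSK"), surfacing as JSON-RPC error -1 "Operation not permitted". The NOT/run_bdevperf wrapper from common/autotest_common.sh turns that failure into a pass (es=1), after which the second bdevperf (77396) and the original target (77204) are torn down. A standalone equivalent of the check, sketched from the commands in the log rather than copied from the NOT helper itself:

  chmod 0666 /tmp/tmp.jij0jggWrV   # world-readable key: the attach below must now fail
  if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jij0jggWrV; then
      echo "unexpected: controller attach succeeded with a world-readable PSK" >&2
      exit 1
  fi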
00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.340 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.340 [2024-07-22 18:26:54.245567] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:42.340 [2024-07-22 18:26:54.245752] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.598 [2024-07-22 18:26:54.410068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.857 [2024-07-22 18:26:54.650003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.857 [2024-07-22 18:26:54.650075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.857 [2024-07-22 18:26:54.650093] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.857 [2024-07-22 18:26:54.650122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.857 [2024-07-22 18:26:54.650133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
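For the target-side permission test a fresh nvmf_tgt (pid 77458) is brought up inside the nvmf_tgt_ns_spdk network namespace, pinned to core mask 0x2 and with every tracepoint group enabled (-e 0xFFFF, hence the "Tracepoint Group Mask 0xFFFF specified" notice). If the TLS behaviour needs to be inspected later, the notices above already spell out how; repeated here only for convenience:

  spdk_trace -s nvmf -i 0        # snapshot the nvmf tracepoints of app instance 0 at runtime, as the notice suggests
  cp /dev/shm/nvmf_trace.0 .     # or keep the shared-memory trace file for offline analysis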
00:19:42.857 [2024-07-22 18:26:54.650183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.857 [2024-07-22 18:26:54.856098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.jij0jggWrV 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.jij0jggWrV 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.jij0jggWrV 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jij0jggWrV 00:19:43.424 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:43.683 [2024-07-22 18:26:55.450036] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.683 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:43.941 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:44.227 [2024-07-22 18:26:56.026350] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.227 [2024-07-22 18:26:56.026664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.227 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:44.485 malloc0 00:19:44.485 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:44.744 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jij0jggWrV 00:19:45.002 [2024-07-22 18:26:56.849417] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:45.002 [2024-07-22 18:26:56.849748] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:45.002 [2024-07-22 18:26:56.849924] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:45.002 request: 00:19:45.002 { 00:19:45.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.002 "host": "nqn.2016-06.io.spdk:host1", 00:19:45.002 "psk": "/tmp/tmp.jij0jggWrV", 00:19:45.002 "method": "nvmf_subsystem_add_host", 00:19:45.002 "req_id": 1 00:19:45.002 } 00:19:45.002 Got JSON-RPC error response 00:19:45.002 response: 00:19:45.002 { 00:19:45.002 "code": -32603, 00:19:45.002 "message": "Internal error" 00:19:45.002 } 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 77458 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77458 ']' 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77458 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77458 00:19:45.002 killing process with pid 77458 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77458' 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77458 00:19:45.002 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77458 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.jij0jggWrV 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
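This is the mirror image of the earlier bdevperf failure, now on the target side: with the key still mode 0666, nvmf_subsystem_add_host cannot retrieve the PSK ("Incorrect permissions for PSK file" → "Could not retrieve PSK from file") and the RPC fails with -32603 "Internal error", which is exactly what target/tls.sh@177 expects. The script then kills target 77458, restores owner-only permissions on the key and restarts the target for the positive path. Reduced to the two commands that matter, both of which appear verbatim in the log:

  chmod 0600 /tmp/tmp.jij0jggWrV   # PSK files must not be group/world readable
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.jij0jggWrV    # now only the 'deprecated PSK path' warning is expected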
00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77533 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77533 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77533 ']' 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.380 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.380 [2024-07-22 18:26:58.374990] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:46.380 [2024-07-22 18:26:58.375480] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.638 [2024-07-22 18:26:58.555609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.897 [2024-07-22 18:26:58.808107] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.897 [2024-07-22 18:26:58.808388] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.897 [2024-07-22 18:26:58.808557] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.897 [2024-07-22 18:26:58.808844] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.897 [2024-07-22 18:26:58.808864] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:46.897 [2024-07-22 18:26:58.808919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.156 [2024-07-22 18:26:59.020652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:47.414 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.414 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:47.414 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:47.414 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.414 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.414 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.414 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.jij0jggWrV 00:19:47.414 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jij0jggWrV 00:19:47.414 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:47.673 [2024-07-22 18:26:59.645629] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.673 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:47.932 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:48.190 [2024-07-22 18:27:00.169835] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:48.190 [2024-07-22 18:27:00.170299] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.190 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:48.448 malloc0 00:19:48.706 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:48.706 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jij0jggWrV 00:19:48.965 [2024-07-22 18:27:00.922300] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:48.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
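With the key back at 0600, setup_nvmf_tgt completes cleanly against the restarted target (pid 77533): TCP transport, subsystem, TLS listener (-k), a 32 MiB malloc bdev with 4096-byte blocks, a namespace, and finally the host entry with its PSK, which now only triggers the 'PSK path' deprecation warning instead of an error. The target-side sequence, collected from the RPCs above into one sketch (rpc.py paths abbreviated):

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS-secured listener
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jij0jggWrV

The bdevperf side (pid 77583) then repeats the attach-and-verify pattern from the first run before save_config captures both the target and the bdevperf configuration below.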
00:19:48.965 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=77583 00:19:48.965 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.965 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.965 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 77583 /var/tmp/bdevperf.sock 00:19:48.965 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77583 ']' 00:19:48.965 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.965 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.965 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.965 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.965 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.223 [2024-07-22 18:27:01.026875] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:49.223 [2024-07-22 18:27:01.027326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77583 ] 00:19:49.223 [2024-07-22 18:27:01.187164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.482 [2024-07-22 18:27:01.428390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.740 [2024-07-22 18:27:01.631906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:49.999 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.999 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:49.999 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jij0jggWrV 00:19:50.257 [2024-07-22 18:27:02.152290] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.257 [2024-07-22 18:27:02.152484] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:50.257 TLSTESTn1 00:19:50.257 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:50.825 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:50.825 "subsystems": [ 00:19:50.825 { 00:19:50.825 "subsystem": "keyring", 00:19:50.825 "config": [] 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "subsystem": "iobuf", 00:19:50.825 "config": [ 00:19:50.825 { 00:19:50.825 "method": "iobuf_set_options", 00:19:50.825 "params": { 00:19:50.825 "small_pool_count": 8192, 00:19:50.825 
"large_pool_count": 1024, 00:19:50.825 "small_bufsize": 8192, 00:19:50.825 "large_bufsize": 135168 00:19:50.825 } 00:19:50.825 } 00:19:50.825 ] 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "subsystem": "sock", 00:19:50.825 "config": [ 00:19:50.825 { 00:19:50.825 "method": "sock_set_default_impl", 00:19:50.825 "params": { 00:19:50.825 "impl_name": "uring" 00:19:50.825 } 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "method": "sock_impl_set_options", 00:19:50.825 "params": { 00:19:50.825 "impl_name": "ssl", 00:19:50.825 "recv_buf_size": 4096, 00:19:50.825 "send_buf_size": 4096, 00:19:50.825 "enable_recv_pipe": true, 00:19:50.825 "enable_quickack": false, 00:19:50.825 "enable_placement_id": 0, 00:19:50.825 "enable_zerocopy_send_server": true, 00:19:50.825 "enable_zerocopy_send_client": false, 00:19:50.825 "zerocopy_threshold": 0, 00:19:50.825 "tls_version": 0, 00:19:50.825 "enable_ktls": false 00:19:50.825 } 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "method": "sock_impl_set_options", 00:19:50.825 "params": { 00:19:50.825 "impl_name": "posix", 00:19:50.825 "recv_buf_size": 2097152, 00:19:50.825 "send_buf_size": 2097152, 00:19:50.825 "enable_recv_pipe": true, 00:19:50.825 "enable_quickack": false, 00:19:50.825 "enable_placement_id": 0, 00:19:50.825 "enable_zerocopy_send_server": true, 00:19:50.825 "enable_zerocopy_send_client": false, 00:19:50.825 "zerocopy_threshold": 0, 00:19:50.825 "tls_version": 0, 00:19:50.825 "enable_ktls": false 00:19:50.825 } 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "method": "sock_impl_set_options", 00:19:50.825 "params": { 00:19:50.825 "impl_name": "uring", 00:19:50.825 "recv_buf_size": 2097152, 00:19:50.825 "send_buf_size": 2097152, 00:19:50.825 "enable_recv_pipe": true, 00:19:50.825 "enable_quickack": false, 00:19:50.825 "enable_placement_id": 0, 00:19:50.825 "enable_zerocopy_send_server": false, 00:19:50.825 "enable_zerocopy_send_client": false, 00:19:50.825 "zerocopy_threshold": 0, 00:19:50.825 "tls_version": 0, 00:19:50.825 "enable_ktls": false 00:19:50.825 } 00:19:50.825 } 00:19:50.825 ] 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "subsystem": "vmd", 00:19:50.825 "config": [] 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "subsystem": "accel", 00:19:50.825 "config": [ 00:19:50.825 { 00:19:50.825 "method": "accel_set_options", 00:19:50.825 "params": { 00:19:50.825 "small_cache_size": 128, 00:19:50.825 "large_cache_size": 16, 00:19:50.825 "task_count": 2048, 00:19:50.825 "sequence_count": 2048, 00:19:50.825 "buf_count": 2048 00:19:50.825 } 00:19:50.825 } 00:19:50.825 ] 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "subsystem": "bdev", 00:19:50.825 "config": [ 00:19:50.825 { 00:19:50.825 "method": "bdev_set_options", 00:19:50.825 "params": { 00:19:50.825 "bdev_io_pool_size": 65535, 00:19:50.825 "bdev_io_cache_size": 256, 00:19:50.825 "bdev_auto_examine": true, 00:19:50.825 "iobuf_small_cache_size": 128, 00:19:50.825 "iobuf_large_cache_size": 16 00:19:50.825 } 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "method": "bdev_raid_set_options", 00:19:50.825 "params": { 00:19:50.825 "process_window_size_kb": 1024, 00:19:50.825 "process_max_bandwidth_mb_sec": 0 00:19:50.825 } 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "method": "bdev_iscsi_set_options", 00:19:50.825 "params": { 00:19:50.825 "timeout_sec": 30 00:19:50.825 } 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "method": "bdev_nvme_set_options", 00:19:50.825 "params": { 00:19:50.825 "action_on_timeout": "none", 00:19:50.825 "timeout_us": 0, 00:19:50.825 "timeout_admin_us": 0, 00:19:50.825 "keep_alive_timeout_ms": 10000, 
00:19:50.825 "arbitration_burst": 0, 00:19:50.825 "low_priority_weight": 0, 00:19:50.825 "medium_priority_weight": 0, 00:19:50.825 "high_priority_weight": 0, 00:19:50.825 "nvme_adminq_poll_period_us": 10000, 00:19:50.825 "nvme_ioq_poll_period_us": 0, 00:19:50.825 "io_queue_requests": 0, 00:19:50.825 "delay_cmd_submit": true, 00:19:50.825 "transport_retry_count": 4, 00:19:50.825 "bdev_retry_count": 3, 00:19:50.825 "transport_ack_timeout": 0, 00:19:50.825 "ctrlr_loss_timeout_sec": 0, 00:19:50.825 "reconnect_delay_sec": 0, 00:19:50.825 "fast_io_fail_timeout_sec": 0, 00:19:50.825 "disable_auto_failback": false, 00:19:50.825 "generate_uuids": false, 00:19:50.825 "transport_tos": 0, 00:19:50.825 "nvme_error_stat": false, 00:19:50.825 "rdma_srq_size": 0, 00:19:50.825 "io_path_stat": false, 00:19:50.825 "allow_accel_sequence": false, 00:19:50.825 "rdma_max_cq_size": 0, 00:19:50.825 "rdma_cm_event_timeout_ms": 0, 00:19:50.825 "dhchap_digests": [ 00:19:50.825 "sha256", 00:19:50.825 "sha384", 00:19:50.825 "sha512" 00:19:50.825 ], 00:19:50.825 "dhchap_dhgroups": [ 00:19:50.825 "null", 00:19:50.825 "ffdhe2048", 00:19:50.825 "ffdhe3072", 00:19:50.825 "ffdhe4096", 00:19:50.825 "ffdhe6144", 00:19:50.825 "ffdhe8192" 00:19:50.825 ] 00:19:50.825 } 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "method": "bdev_nvme_set_hotplug", 00:19:50.825 "params": { 00:19:50.825 "period_us": 100000, 00:19:50.825 "enable": false 00:19:50.825 } 00:19:50.825 }, 00:19:50.825 { 00:19:50.825 "method": "bdev_malloc_create", 00:19:50.825 "params": { 00:19:50.825 "name": "malloc0", 00:19:50.825 "num_blocks": 8192, 00:19:50.825 "block_size": 4096, 00:19:50.825 "physical_block_size": 4096, 00:19:50.825 "uuid": "f597680e-65e6-4b93-83ef-d8b07ff6598a", 00:19:50.825 "optimal_io_boundary": 0, 00:19:50.826 "md_size": 0, 00:19:50.826 "dif_type": 0, 00:19:50.826 "dif_is_head_of_md": false, 00:19:50.826 "dif_pi_format": 0 00:19:50.826 } 00:19:50.826 }, 00:19:50.826 { 00:19:50.826 "method": "bdev_wait_for_examine" 00:19:50.826 } 00:19:50.826 ] 00:19:50.826 }, 00:19:50.826 { 00:19:50.826 "subsystem": "nbd", 00:19:50.826 "config": [] 00:19:50.826 }, 00:19:50.826 { 00:19:50.826 "subsystem": "scheduler", 00:19:50.826 "config": [ 00:19:50.826 { 00:19:50.826 "method": "framework_set_scheduler", 00:19:50.826 "params": { 00:19:50.826 "name": "static" 00:19:50.826 } 00:19:50.826 } 00:19:50.826 ] 00:19:50.826 }, 00:19:50.826 { 00:19:50.826 "subsystem": "nvmf", 00:19:50.826 "config": [ 00:19:50.826 { 00:19:50.826 "method": "nvmf_set_config", 00:19:50.826 "params": { 00:19:50.826 "discovery_filter": "match_any", 00:19:50.826 "admin_cmd_passthru": { 00:19:50.826 "identify_ctrlr": false 00:19:50.826 } 00:19:50.826 } 00:19:50.826 }, 00:19:50.826 { 00:19:50.826 "method": "nvmf_set_max_subsystems", 00:19:50.826 "params": { 00:19:50.826 "max_subsystems": 1024 00:19:50.826 } 00:19:50.826 }, 00:19:50.826 { 00:19:50.826 "method": "nvmf_set_crdt", 00:19:50.826 "params": { 00:19:50.826 "crdt1": 0, 00:19:50.826 "crdt2": 0, 00:19:50.826 "crdt3": 0 00:19:50.826 } 00:19:50.826 }, 00:19:50.826 { 00:19:50.826 "method": "nvmf_create_transport", 00:19:50.826 "params": { 00:19:50.826 "trtype": "TCP", 00:19:50.826 "max_queue_depth": 128, 00:19:50.826 "max_io_qpairs_per_ctrlr": 127, 00:19:50.826 "in_capsule_data_size": 4096, 00:19:50.826 "max_io_size": 131072, 00:19:50.826 "io_unit_size": 131072, 00:19:50.826 "max_aq_depth": 128, 00:19:50.826 "num_shared_buffers": 511, 00:19:50.826 "buf_cache_size": 4294967295, 00:19:50.826 "dif_insert_or_strip": false, 00:19:50.826 "zcopy": 
false, 00:19:50.826 "c2h_success": false, 00:19:50.826 "sock_priority": 0, 00:19:50.826 "abort_timeout_sec": 1, 00:19:50.826 "ack_timeout": 0, 00:19:50.826 "data_wr_pool_size": 0 00:19:50.826 } 00:19:50.826 }, 00:19:50.826 { 00:19:50.826 "method": "nvmf_create_subsystem", 00:19:50.826 "params": { 00:19:50.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.826 "allow_any_host": false, 00:19:50.826 "serial_number": "SPDK00000000000001", 00:19:50.826 "model_number": "SPDK bdev Controller", 00:19:50.826 "max_namespaces": 10, 00:19:50.826 "min_cntlid": 1, 00:19:50.826 "max_cntlid": 65519, 00:19:50.826 "ana_reporting": false 00:19:50.826 } 00:19:50.826 }, 00:19:50.826 { 00:19:50.826 "method": "nvmf_subsystem_add_host", 00:19:50.826 "params": { 00:19:50.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.826 "host": "nqn.2016-06.io.spdk:host1", 00:19:50.826 "psk": "/tmp/tmp.jij0jggWrV" 00:19:50.826 } 00:19:50.826 }, 00:19:50.826 { 00:19:50.826 "method": "nvmf_subsystem_add_ns", 00:19:50.826 "params": { 00:19:50.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.826 "namespace": { 00:19:50.826 "nsid": 1, 00:19:50.826 "bdev_name": "malloc0", 00:19:50.826 "nguid": "F597680E65E64B9383EFD8B07FF6598A", 00:19:50.826 "uuid": "f597680e-65e6-4b93-83ef-d8b07ff6598a", 00:19:50.826 "no_auto_visible": false 00:19:50.826 } 00:19:50.826 } 00:19:50.826 }, 00:19:50.826 { 00:19:50.826 "method": "nvmf_subsystem_add_listener", 00:19:50.826 "params": { 00:19:50.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.826 "listen_address": { 00:19:50.826 "trtype": "TCP", 00:19:50.826 "adrfam": "IPv4", 00:19:50.826 "traddr": "10.0.0.2", 00:19:50.826 "trsvcid": "4420" 00:19:50.826 }, 00:19:50.826 "secure_channel": true 00:19:50.826 } 00:19:50.826 } 00:19:50.826 ] 00:19:50.826 } 00:19:50.826 ] 00:19:50.826 }' 00:19:50.826 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:51.085 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:51.085 "subsystems": [ 00:19:51.085 { 00:19:51.085 "subsystem": "keyring", 00:19:51.085 "config": [] 00:19:51.085 }, 00:19:51.085 { 00:19:51.085 "subsystem": "iobuf", 00:19:51.085 "config": [ 00:19:51.085 { 00:19:51.085 "method": "iobuf_set_options", 00:19:51.085 "params": { 00:19:51.085 "small_pool_count": 8192, 00:19:51.085 "large_pool_count": 1024, 00:19:51.085 "small_bufsize": 8192, 00:19:51.085 "large_bufsize": 135168 00:19:51.085 } 00:19:51.085 } 00:19:51.085 ] 00:19:51.085 }, 00:19:51.085 { 00:19:51.085 "subsystem": "sock", 00:19:51.085 "config": [ 00:19:51.085 { 00:19:51.085 "method": "sock_set_default_impl", 00:19:51.085 "params": { 00:19:51.085 "impl_name": "uring" 00:19:51.085 } 00:19:51.085 }, 00:19:51.085 { 00:19:51.085 "method": "sock_impl_set_options", 00:19:51.085 "params": { 00:19:51.085 "impl_name": "ssl", 00:19:51.085 "recv_buf_size": 4096, 00:19:51.085 "send_buf_size": 4096, 00:19:51.085 "enable_recv_pipe": true, 00:19:51.085 "enable_quickack": false, 00:19:51.085 "enable_placement_id": 0, 00:19:51.085 "enable_zerocopy_send_server": true, 00:19:51.085 "enable_zerocopy_send_client": false, 00:19:51.085 "zerocopy_threshold": 0, 00:19:51.085 "tls_version": 0, 00:19:51.085 "enable_ktls": false 00:19:51.085 } 00:19:51.085 }, 00:19:51.085 { 00:19:51.085 "method": "sock_impl_set_options", 00:19:51.085 "params": { 00:19:51.085 "impl_name": "posix", 00:19:51.085 "recv_buf_size": 2097152, 00:19:51.085 "send_buf_size": 2097152, 00:19:51.085 
"enable_recv_pipe": true, 00:19:51.085 "enable_quickack": false, 00:19:51.085 "enable_placement_id": 0, 00:19:51.085 "enable_zerocopy_send_server": true, 00:19:51.085 "enable_zerocopy_send_client": false, 00:19:51.085 "zerocopy_threshold": 0, 00:19:51.085 "tls_version": 0, 00:19:51.085 "enable_ktls": false 00:19:51.085 } 00:19:51.085 }, 00:19:51.085 { 00:19:51.085 "method": "sock_impl_set_options", 00:19:51.085 "params": { 00:19:51.085 "impl_name": "uring", 00:19:51.085 "recv_buf_size": 2097152, 00:19:51.085 "send_buf_size": 2097152, 00:19:51.085 "enable_recv_pipe": true, 00:19:51.085 "enable_quickack": false, 00:19:51.085 "enable_placement_id": 0, 00:19:51.085 "enable_zerocopy_send_server": false, 00:19:51.085 "enable_zerocopy_send_client": false, 00:19:51.085 "zerocopy_threshold": 0, 00:19:51.085 "tls_version": 0, 00:19:51.085 "enable_ktls": false 00:19:51.085 } 00:19:51.085 } 00:19:51.085 ] 00:19:51.085 }, 00:19:51.085 { 00:19:51.085 "subsystem": "vmd", 00:19:51.085 "config": [] 00:19:51.085 }, 00:19:51.085 { 00:19:51.085 "subsystem": "accel", 00:19:51.085 "config": [ 00:19:51.085 { 00:19:51.085 "method": "accel_set_options", 00:19:51.085 "params": { 00:19:51.085 "small_cache_size": 128, 00:19:51.085 "large_cache_size": 16, 00:19:51.085 "task_count": 2048, 00:19:51.085 "sequence_count": 2048, 00:19:51.085 "buf_count": 2048 00:19:51.085 } 00:19:51.085 } 00:19:51.085 ] 00:19:51.085 }, 00:19:51.085 { 00:19:51.085 "subsystem": "bdev", 00:19:51.085 "config": [ 00:19:51.085 { 00:19:51.085 "method": "bdev_set_options", 00:19:51.085 "params": { 00:19:51.085 "bdev_io_pool_size": 65535, 00:19:51.085 "bdev_io_cache_size": 256, 00:19:51.085 "bdev_auto_examine": true, 00:19:51.085 "iobuf_small_cache_size": 128, 00:19:51.085 "iobuf_large_cache_size": 16 00:19:51.085 } 00:19:51.085 }, 00:19:51.085 { 00:19:51.086 "method": "bdev_raid_set_options", 00:19:51.086 "params": { 00:19:51.086 "process_window_size_kb": 1024, 00:19:51.086 "process_max_bandwidth_mb_sec": 0 00:19:51.086 } 00:19:51.086 }, 00:19:51.086 { 00:19:51.086 "method": "bdev_iscsi_set_options", 00:19:51.086 "params": { 00:19:51.086 "timeout_sec": 30 00:19:51.086 } 00:19:51.086 }, 00:19:51.086 { 00:19:51.086 "method": "bdev_nvme_set_options", 00:19:51.086 "params": { 00:19:51.086 "action_on_timeout": "none", 00:19:51.086 "timeout_us": 0, 00:19:51.086 "timeout_admin_us": 0, 00:19:51.086 "keep_alive_timeout_ms": 10000, 00:19:51.086 "arbitration_burst": 0, 00:19:51.086 "low_priority_weight": 0, 00:19:51.086 "medium_priority_weight": 0, 00:19:51.086 "high_priority_weight": 0, 00:19:51.086 "nvme_adminq_poll_period_us": 10000, 00:19:51.086 "nvme_ioq_poll_period_us": 0, 00:19:51.086 "io_queue_requests": 512, 00:19:51.086 "delay_cmd_submit": true, 00:19:51.086 "transport_retry_count": 4, 00:19:51.086 "bdev_retry_count": 3, 00:19:51.086 "transport_ack_timeout": 0, 00:19:51.086 "ctrlr_loss_timeout_sec": 0, 00:19:51.086 "reconnect_delay_sec": 0, 00:19:51.086 "fast_io_fail_timeout_sec": 0, 00:19:51.086 "disable_auto_failback": false, 00:19:51.086 "generate_uuids": false, 00:19:51.086 "transport_tos": 0, 00:19:51.086 "nvme_error_stat": false, 00:19:51.086 "rdma_srq_size": 0, 00:19:51.086 "io_path_stat": false, 00:19:51.086 "allow_accel_sequence": false, 00:19:51.086 "rdma_max_cq_size": 0, 00:19:51.086 "rdma_cm_event_timeout_ms": 0, 00:19:51.086 "dhchap_digests": [ 00:19:51.086 "sha256", 00:19:51.086 "sha384", 00:19:51.086 "sha512" 00:19:51.086 ], 00:19:51.086 "dhchap_dhgroups": [ 00:19:51.086 "null", 00:19:51.086 "ffdhe2048", 00:19:51.086 "ffdhe3072", 
00:19:51.086 "ffdhe4096", 00:19:51.086 "ffdhe6144", 00:19:51.086 "ffdhe8192" 00:19:51.086 ] 00:19:51.086 } 00:19:51.086 }, 00:19:51.086 { 00:19:51.086 "method": "bdev_nvme_attach_controller", 00:19:51.086 "params": { 00:19:51.086 "name": "TLSTEST", 00:19:51.086 "trtype": "TCP", 00:19:51.086 "adrfam": "IPv4", 00:19:51.086 "traddr": "10.0.0.2", 00:19:51.086 "trsvcid": "4420", 00:19:51.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.086 "prchk_reftag": false, 00:19:51.086 "prchk_guard": false, 00:19:51.086 "ctrlr_loss_timeout_sec": 0, 00:19:51.086 "reconnect_delay_sec": 0, 00:19:51.086 "fast_io_fail_timeout_sec": 0, 00:19:51.086 "psk": "/tmp/tmp.jij0jggWrV", 00:19:51.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.086 "hdgst": false, 00:19:51.086 "ddgst": false 00:19:51.086 } 00:19:51.086 }, 00:19:51.086 { 00:19:51.086 "method": "bdev_nvme_set_hotplug", 00:19:51.086 "params": { 00:19:51.086 "period_us": 100000, 00:19:51.086 "enable": false 00:19:51.086 } 00:19:51.086 }, 00:19:51.086 { 00:19:51.086 "method": "bdev_wait_for_examine" 00:19:51.086 } 00:19:51.086 ] 00:19:51.086 }, 00:19:51.086 { 00:19:51.086 "subsystem": "nbd", 00:19:51.086 "config": [] 00:19:51.086 } 00:19:51.086 ] 00:19:51.086 }' 00:19:51.086 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 77583 00:19:51.086 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77583 ']' 00:19:51.086 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77583 00:19:51.086 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:51.086 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.086 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77583 00:19:51.086 killing process with pid 77583 00:19:51.086 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.086 00:19:51.086 Latency(us) 00:19:51.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.086 =================================================================================================================== 00:19:51.086 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.086 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:51.086 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:51.086 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77583' 00:19:51.086 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77583 00:19:51.086 [2024-07-22 18:27:02.936412] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:51.086 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77583 00:19:52.037 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 77533 00:19:52.037 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77533 ']' 00:19:52.037 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77533 00:19:52.037 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:52.037 
18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.037 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77533 00:19:52.037 killing process with pid 77533 00:19:52.037 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.037 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.037 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77533' 00:19:52.037 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77533 00:19:52.037 [2024-07-22 18:27:04.053480] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:52.037 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77533 00:19:53.413 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:53.413 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:53.413 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.413 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.413 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:53.413 "subsystems": [ 00:19:53.413 { 00:19:53.413 "subsystem": "keyring", 00:19:53.413 "config": [] 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "subsystem": "iobuf", 00:19:53.413 "config": [ 00:19:53.413 { 00:19:53.413 "method": "iobuf_set_options", 00:19:53.413 "params": { 00:19:53.413 "small_pool_count": 8192, 00:19:53.413 "large_pool_count": 1024, 00:19:53.413 "small_bufsize": 8192, 00:19:53.413 "large_bufsize": 135168 00:19:53.413 } 00:19:53.413 } 00:19:53.413 ] 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "subsystem": "sock", 00:19:53.413 "config": [ 00:19:53.413 { 00:19:53.413 "method": "sock_set_default_impl", 00:19:53.413 "params": { 00:19:53.413 "impl_name": "uring" 00:19:53.413 } 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "method": "sock_impl_set_options", 00:19:53.413 "params": { 00:19:53.413 "impl_name": "ssl", 00:19:53.413 "recv_buf_size": 4096, 00:19:53.413 "send_buf_size": 4096, 00:19:53.413 "enable_recv_pipe": true, 00:19:53.413 "enable_quickack": false, 00:19:53.413 "enable_placement_id": 0, 00:19:53.413 "enable_zerocopy_send_server": true, 00:19:53.413 "enable_zerocopy_send_client": false, 00:19:53.413 "zerocopy_threshold": 0, 00:19:53.413 "tls_version": 0, 00:19:53.413 "enable_ktls": false 00:19:53.413 } 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "method": "sock_impl_set_options", 00:19:53.413 "params": { 00:19:53.413 "impl_name": "posix", 00:19:53.413 "recv_buf_size": 2097152, 00:19:53.413 "send_buf_size": 2097152, 00:19:53.413 "enable_recv_pipe": true, 00:19:53.413 "enable_quickack": false, 00:19:53.413 "enable_placement_id": 0, 00:19:53.413 "enable_zerocopy_send_server": true, 00:19:53.413 "enable_zerocopy_send_client": false, 00:19:53.413 "zerocopy_threshold": 0, 00:19:53.413 "tls_version": 0, 00:19:53.413 "enable_ktls": false 00:19:53.413 } 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "method": "sock_impl_set_options", 00:19:53.413 "params": { 00:19:53.413 "impl_name": "uring", 00:19:53.413 "recv_buf_size": 2097152, 
00:19:53.413 "send_buf_size": 2097152, 00:19:53.413 "enable_recv_pipe": true, 00:19:53.413 "enable_quickack": false, 00:19:53.413 "enable_placement_id": 0, 00:19:53.413 "enable_zerocopy_send_server": false, 00:19:53.413 "enable_zerocopy_send_client": false, 00:19:53.413 "zerocopy_threshold": 0, 00:19:53.413 "tls_version": 0, 00:19:53.413 "enable_ktls": false 00:19:53.413 } 00:19:53.413 } 00:19:53.413 ] 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "subsystem": "vmd", 00:19:53.413 "config": [] 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "subsystem": "accel", 00:19:53.413 "config": [ 00:19:53.413 { 00:19:53.413 "method": "accel_set_options", 00:19:53.413 "params": { 00:19:53.413 "small_cache_size": 128, 00:19:53.413 "large_cache_size": 16, 00:19:53.413 "task_count": 2048, 00:19:53.413 "sequence_count": 2048, 00:19:53.413 "buf_count": 2048 00:19:53.413 } 00:19:53.413 } 00:19:53.413 ] 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "subsystem": "bdev", 00:19:53.413 "config": [ 00:19:53.413 { 00:19:53.413 "method": "bdev_set_options", 00:19:53.413 "params": { 00:19:53.413 "bdev_io_pool_size": 65535, 00:19:53.413 "bdev_io_cache_size": 256, 00:19:53.413 "bdev_auto_examine": true, 00:19:53.413 "iobuf_small_cache_size": 128, 00:19:53.413 "iobuf_large_cache_size": 16 00:19:53.413 } 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "method": "bdev_raid_set_options", 00:19:53.413 "params": { 00:19:53.413 "process_window_size_kb": 1024, 00:19:53.413 "process_max_bandwidth_mb_sec": 0 00:19:53.413 } 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "method": "bdev_iscsi_set_options", 00:19:53.413 "params": { 00:19:53.413 "timeout_sec": 30 00:19:53.413 } 00:19:53.413 }, 00:19:53.413 { 00:19:53.413 "method": "bdev_nvme_set_options", 00:19:53.413 "params": { 00:19:53.413 "action_on_timeout": "none", 00:19:53.413 "timeout_us": 0, 00:19:53.413 "timeout_admin_us": 0, 00:19:53.413 "keep_alive_timeout_ms": 10000, 00:19:53.413 "arbitration_burst": 0, 00:19:53.413 "low_priority_weight": 0, 00:19:53.413 "medium_priority_weight": 0, 00:19:53.413 "high_priority_weight": 0, 00:19:53.413 "nvme_adminq_poll_period_us": 10000, 00:19:53.413 "nvme_ioq_poll_period_us": 0, 00:19:53.413 "io_queue_requests": 0, 00:19:53.413 "delay_cmd_submit": true, 00:19:53.413 "transport_retry_count": 4, 00:19:53.413 "bdev_retry_count": 3, 00:19:53.413 "transport_ack_timeout": 0, 00:19:53.413 "ctrlr_loss_timeout_sec": 0, 00:19:53.413 "reconnect_delay_sec": 0, 00:19:53.413 "fast_io_fail_timeout_sec": 0, 00:19:53.413 "disable_auto_failback": false, 00:19:53.413 "generate_uuids": false, 00:19:53.413 "transport_tos": 0, 00:19:53.413 "nvme_error_stat": false, 00:19:53.413 "rdma_srq_size": 0, 00:19:53.413 "io_path_stat": false, 00:19:53.413 "allow_accel_sequence": false, 00:19:53.413 "rdma_max_cq_size": 0, 00:19:53.413 "rdma_cm_event_timeout_ms": 0, 00:19:53.413 "dhchap_digests": [ 00:19:53.413 "sha256", 00:19:53.414 "sha384", 00:19:53.414 "sha512" 00:19:53.414 ], 00:19:53.414 "dhchap_dhgroups": [ 00:19:53.414 "null", 00:19:53.414 "ffdhe2048", 00:19:53.414 "ffdhe3072", 00:19:53.414 "ffdhe4096", 00:19:53.414 "ffdhe6144", 00:19:53.414 "ffdhe8192" 00:19:53.414 ] 00:19:53.414 } 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "method": "bdev_nvme_set_hotplug", 00:19:53.414 "params": { 00:19:53.414 "period_us": 100000, 00:19:53.414 "enable": false 00:19:53.414 } 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "method": "bdev_malloc_create", 00:19:53.414 "params": { 00:19:53.414 "name": "malloc0", 00:19:53.414 "num_blocks": 8192, 00:19:53.414 "block_size": 4096, 00:19:53.414 
"physical_block_size": 4096, 00:19:53.414 "uuid": "f597680e-65e6-4b93-83ef-d8b07ff6598a", 00:19:53.414 "optimal_io_boundary": 0, 00:19:53.414 "md_size": 0, 00:19:53.414 "dif_type": 0, 00:19:53.414 "dif_is_head_of_md": false, 00:19:53.414 "dif_pi_format": 0 00:19:53.414 } 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "method": "bdev_wait_for_examine" 00:19:53.414 } 00:19:53.414 ] 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "subsystem": "nbd", 00:19:53.414 "config": [] 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "subsystem": "scheduler", 00:19:53.414 "config": [ 00:19:53.414 { 00:19:53.414 "method": "framework_set_scheduler", 00:19:53.414 "params": { 00:19:53.414 "name": "static" 00:19:53.414 } 00:19:53.414 } 00:19:53.414 ] 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "subsystem": "nvmf", 00:19:53.414 "config": [ 00:19:53.414 { 00:19:53.414 "method": "nvmf_set_config", 00:19:53.414 "params": { 00:19:53.414 "discovery_filter": "match_any", 00:19:53.414 "admin_cmd_passthru": { 00:19:53.414 "identify_ctrlr": false 00:19:53.414 } 00:19:53.414 } 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "method": "nvmf_set_max_subsystems", 00:19:53.414 "params": { 00:19:53.414 "max_subsystems": 1024 00:19:53.414 } 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "method": "nvmf_set_crdt", 00:19:53.414 "params": { 00:19:53.414 "crdt1": 0, 00:19:53.414 "crdt2": 0, 00:19:53.414 "crdt3": 0 00:19:53.414 } 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "method": "nvmf_create_transport", 00:19:53.414 "params": { 00:19:53.414 "trtype": "TCP", 00:19:53.414 "max_queue_depth": 128, 00:19:53.414 "max_io_qpairs_per_ctrlr": 127, 00:19:53.414 "in_capsule_data_size": 4096, 00:19:53.414 "max_io_size": 131072, 00:19:53.414 "io_unit_size": 131072, 00:19:53.414 "max_aq_depth": 128, 00:19:53.414 "num_shared_buffers": 511, 00:19:53.414 "buf_cache_size": 4294967295, 00:19:53.414 "dif_insert_or_strip": false, 00:19:53.414 "zcopy": false, 00:19:53.414 "c2h_success": false, 00:19:53.414 "sock_priority": 0, 00:19:53.414 "abort_timeout_sec": 1, 00:19:53.414 "ack_timeout": 0, 00:19:53.414 "data_wr_pool_size": 0 00:19:53.414 } 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "method": "nvmf_create_subsystem", 00:19:53.414 "params": { 00:19:53.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.414 "allow_any_host": false, 00:19:53.414 "serial_number": "SPDK00000000000001", 00:19:53.414 "model_number": "SPDK bdev Controller", 00:19:53.414 "max_namespaces": 10, 00:19:53.414 "min_cntlid": 1, 00:19:53.414 "max_cntlid": 65519, 00:19:53.414 "ana_reporting": false 00:19:53.414 } 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "method": "nvmf_subsystem_add_host", 00:19:53.414 "params": { 00:19:53.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.414 "host": "nqn.2016-06.io.spdk:host1", 00:19:53.414 "psk": "/tmp/tmp.jij0jggWrV" 00:19:53.414 } 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "method": "nvmf_subsystem_add_ns", 00:19:53.414 "params": { 00:19:53.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.414 "namespace": { 00:19:53.414 "nsid": 1, 00:19:53.414 "bdev_name": "malloc0", 00:19:53.414 "nguid": "F597680E65E64B9383EFD8B07FF6598A", 00:19:53.414 "uuid": "f597680e-65e6-4b93-83ef-d8b07ff6598a", 00:19:53.414 "no_auto_visible": false 00:19:53.414 } 00:19:53.414 } 00:19:53.414 }, 00:19:53.414 { 00:19:53.414 "method": "nvmf_subsystem_add_listener", 00:19:53.414 "params": { 00:19:53.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.414 "listen_address": { 00:19:53.414 "trtype": "TCP", 00:19:53.414 "adrfam": "IPv4", 00:19:53.414 "traddr": "10.0.0.2", 00:19:53.414 "trsvcid": 
"4420" 00:19:53.414 }, 00:19:53.414 "secure_channel": true 00:19:53.414 } 00:19:53.414 } 00:19:53.414 ] 00:19:53.414 } 00:19:53.414 ] 00:19:53.414 }' 00:19:53.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.414 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77656 00:19:53.414 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:53.414 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77656 00:19:53.414 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77656 ']' 00:19:53.414 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.414 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.414 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.414 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.414 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.673 [2024-07-22 18:27:05.458001] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:53.673 [2024-07-22 18:27:05.458165] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.673 [2024-07-22 18:27:05.624345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.931 [2024-07-22 18:27:05.869717] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.931 [2024-07-22 18:27:05.869814] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.931 [2024-07-22 18:27:05.869832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.931 [2024-07-22 18:27:05.869847] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.931 [2024-07-22 18:27:05.869859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:53.931 [2024-07-22 18:27:05.870015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.190 [2024-07-22 18:27:06.197493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:54.448 [2024-07-22 18:27:06.372370] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.448 [2024-07-22 18:27:06.400103] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:54.448 [2024-07-22 18:27:06.416078] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.448 [2024-07-22 18:27:06.416433] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.448 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.448 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:54.448 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:54.448 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.448 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.721 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.721 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=77688 00:19:54.721 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 77688 /var/tmp/bdevperf.sock 00:19:54.721 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77688 ']' 00:19:54.721 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.721 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.721 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
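The notices above (experimental TLS listener, PSK-path deprecation) appear while the target applies the TLS subsystem settings carried in the JSON config. The same setup is done over RPC later in this log (target/tls.sh@51–@58); collected into one sketch, with the PSK file being the temporary key generated earlier in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # talks to /var/tmp/spdk.sock by default
    key=/tmp/tmp.jij0jggWrV                                # PSK file created earlier in the test
    $rpc nvmf_create_transport -t tcp -o                   # "TCP Transport Init" notice
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k                        # -k = secure channel, hence the TLS notices
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk $key                                           # triggers the PSK-path deprecation warning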
00:19:54.721 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.721 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:54.721 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.721 18:27:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:54.721 "subsystems": [ 00:19:54.721 { 00:19:54.721 "subsystem": "keyring", 00:19:54.721 "config": [] 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "subsystem": "iobuf", 00:19:54.721 "config": [ 00:19:54.721 { 00:19:54.721 "method": "iobuf_set_options", 00:19:54.721 "params": { 00:19:54.721 "small_pool_count": 8192, 00:19:54.721 "large_pool_count": 1024, 00:19:54.721 "small_bufsize": 8192, 00:19:54.721 "large_bufsize": 135168 00:19:54.721 } 00:19:54.721 } 00:19:54.721 ] 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "subsystem": "sock", 00:19:54.721 "config": [ 00:19:54.721 { 00:19:54.721 "method": "sock_set_default_impl", 00:19:54.721 "params": { 00:19:54.721 "impl_name": "uring" 00:19:54.721 } 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "method": "sock_impl_set_options", 00:19:54.721 "params": { 00:19:54.721 "impl_name": "ssl", 00:19:54.721 "recv_buf_size": 4096, 00:19:54.721 "send_buf_size": 4096, 00:19:54.721 "enable_recv_pipe": true, 00:19:54.721 "enable_quickack": false, 00:19:54.721 "enable_placement_id": 0, 00:19:54.721 "enable_zerocopy_send_server": true, 00:19:54.721 "enable_zerocopy_send_client": false, 00:19:54.721 "zerocopy_threshold": 0, 00:19:54.721 "tls_version": 0, 00:19:54.721 "enable_ktls": false 00:19:54.721 } 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "method": "sock_impl_set_options", 00:19:54.721 "params": { 00:19:54.721 "impl_name": "posix", 00:19:54.721 "recv_buf_size": 2097152, 00:19:54.721 "send_buf_size": 2097152, 00:19:54.721 "enable_recv_pipe": true, 00:19:54.721 "enable_quickack": false, 00:19:54.721 "enable_placement_id": 0, 00:19:54.721 "enable_zerocopy_send_server": true, 00:19:54.721 "enable_zerocopy_send_client": false, 00:19:54.721 "zerocopy_threshold": 0, 00:19:54.721 "tls_version": 0, 00:19:54.721 "enable_ktls": false 00:19:54.721 } 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "method": "sock_impl_set_options", 00:19:54.721 "params": { 00:19:54.721 "impl_name": "uring", 00:19:54.721 "recv_buf_size": 2097152, 00:19:54.721 "send_buf_size": 2097152, 00:19:54.721 "enable_recv_pipe": true, 00:19:54.721 "enable_quickack": false, 00:19:54.721 "enable_placement_id": 0, 00:19:54.721 "enable_zerocopy_send_server": false, 00:19:54.721 "enable_zerocopy_send_client": false, 00:19:54.721 "zerocopy_threshold": 0, 00:19:54.721 "tls_version": 0, 00:19:54.721 "enable_ktls": false 00:19:54.721 } 00:19:54.721 } 00:19:54.721 ] 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "subsystem": "vmd", 00:19:54.721 "config": [] 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "subsystem": "accel", 00:19:54.721 "config": [ 00:19:54.721 { 00:19:54.721 "method": "accel_set_options", 00:19:54.721 "params": { 00:19:54.721 "small_cache_size": 128, 00:19:54.721 "large_cache_size": 16, 00:19:54.721 "task_count": 2048, 00:19:54.721 "sequence_count": 2048, 00:19:54.721 "buf_count": 2048 00:19:54.721 } 00:19:54.721 } 00:19:54.721 ] 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "subsystem": "bdev", 00:19:54.721 "config": [ 00:19:54.721 { 00:19:54.721 "method": "bdev_set_options", 
00:19:54.721 "params": { 00:19:54.721 "bdev_io_pool_size": 65535, 00:19:54.721 "bdev_io_cache_size": 256, 00:19:54.721 "bdev_auto_examine": true, 00:19:54.721 "iobuf_small_cache_size": 128, 00:19:54.721 "iobuf_large_cache_size": 16 00:19:54.721 } 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "method": "bdev_raid_set_options", 00:19:54.721 "params": { 00:19:54.721 "process_window_size_kb": 1024, 00:19:54.721 "process_max_bandwidth_mb_sec": 0 00:19:54.721 } 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "method": "bdev_iscsi_set_options", 00:19:54.721 "params": { 00:19:54.721 "timeout_sec": 30 00:19:54.721 } 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "method": "bdev_nvme_set_options", 00:19:54.721 "params": { 00:19:54.721 "action_on_timeout": "none", 00:19:54.721 "timeout_us": 0, 00:19:54.721 "timeout_admin_us": 0, 00:19:54.721 "keep_alive_timeout_ms": 10000, 00:19:54.721 "arbitration_burst": 0, 00:19:54.721 "low_priority_weight": 0, 00:19:54.721 "medium_priority_weight": 0, 00:19:54.721 "high_priority_weight": 0, 00:19:54.721 "nvme_adminq_poll_period_us": 10000, 00:19:54.721 "nvme_ioq_poll_period_us": 0, 00:19:54.721 "io_queue_requests": 512, 00:19:54.721 "delay_cmd_submit": true, 00:19:54.721 "transport_retry_count": 4, 00:19:54.721 "bdev_retry_count": 3, 00:19:54.721 "transport_ack_timeout": 0, 00:19:54.721 "ctrlr_loss_timeout_sec": 0, 00:19:54.721 "reconnect_delay_sec": 0, 00:19:54.721 "fast_io_fail_timeout_sec": 0, 00:19:54.721 "disable_auto_failback": false, 00:19:54.721 "generate_uuids": false, 00:19:54.721 "transport_tos": 0, 00:19:54.721 "nvme_error_stat": false, 00:19:54.721 "rdma_srq_size": 0, 00:19:54.721 "io_path_stat": false, 00:19:54.721 "allow_accel_sequence": false, 00:19:54.721 "rdma_max_cq_size": 0, 00:19:54.721 "rdma_cm_event_timeout_ms": 0, 00:19:54.721 "dhchap_digests": [ 00:19:54.721 "sha256", 00:19:54.721 "sha384", 00:19:54.721 "sha512" 00:19:54.721 ], 00:19:54.721 "dhchap_dhgroups": [ 00:19:54.721 "null", 00:19:54.721 "ffdhe2048", 00:19:54.721 "ffdhe3072", 00:19:54.721 "ffdhe4096", 00:19:54.721 "ffdhe6144", 00:19:54.721 "ffdhe8192" 00:19:54.721 ] 00:19:54.721 } 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "method": "bdev_nvme_attach_controller", 00:19:54.721 "params": { 00:19:54.721 "name": "TLSTEST", 00:19:54.721 "trtype": "TCP", 00:19:54.721 "adrfam": "IPv4", 00:19:54.721 "traddr": "10.0.0.2", 00:19:54.721 "trsvcid": "4420", 00:19:54.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.721 "prchk_reftag": false, 00:19:54.721 "prchk_guard": false, 00:19:54.721 "ctrlr_loss_timeout_sec": 0, 00:19:54.721 "reconnect_delay_sec": 0, 00:19:54.721 "fast_io_fail_timeout_sec": 0, 00:19:54.721 "psk": "/tmp/tmp.jij0jggWrV", 00:19:54.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.721 "hdgst": false, 00:19:54.721 "ddgst": false 00:19:54.721 } 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "method": "bdev_nvme_set_hotplug", 00:19:54.721 "params": { 00:19:54.721 "period_us": 100000, 00:19:54.721 "enable": false 00:19:54.721 } 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "method": "bdev_wait_for_examine" 00:19:54.721 } 00:19:54.721 ] 00:19:54.721 }, 00:19:54.721 { 00:19:54.721 "subsystem": "nbd", 00:19:54.721 "config": [] 00:19:54.721 } 00:19:54.721 ] 00:19:54.721 }' 00:19:54.721 [2024-07-22 18:27:06.585054] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:19:54.722 [2024-07-22 18:27:06.585497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77688 ] 00:19:54.980 [2024-07-22 18:27:06.755131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.238 [2024-07-22 18:27:07.045210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.497 [2024-07-22 18:27:07.336761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:55.497 [2024-07-22 18:27:07.454026] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.497 [2024-07-22 18:27:07.454395] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:55.755 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.755 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:55.755 18:27:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:55.755 Running I/O for 10 seconds... 00:20:05.726 00:20:05.726 Latency(us) 00:20:05.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.726 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:05.726 Verification LBA range: start 0x0 length 0x2000 00:20:05.726 TLSTESTn1 : 10.02 2828.77 11.05 0.00 0.00 45155.79 9115.46 49330.73 00:20:05.726 =================================================================================================================== 00:20:05.726 Total : 2828.77 11.05 0.00 0.00 45155.79 9115.46 49330.73 00:20:05.984 0 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 77688 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77688 ']' 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77688 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77688 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77688' 00:20:05.984 killing process with pid 77688 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77688 00:20:05.984 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.984 00:20:05.984 Latency(us) 00:20:05.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.984 
=================================================================================================================== 00:20:05.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.984 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77688 00:20:05.984 [2024-07-22 18:27:17.783498] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:07.360 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 77656 00:20:07.360 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77656 ']' 00:20:07.360 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77656 00:20:07.360 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:07.360 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.360 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77656 00:20:07.360 killing process with pid 77656 00:20:07.360 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:07.360 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:07.360 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77656' 00:20:07.360 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77656 00:20:07.360 [2024-07-22 18:27:19.028709] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:07.360 18:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77656 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77840 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77840 00:20:08.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77840 ']' 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
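The teardown traced above is the killprocess helper from autotest_common.sh. Condensed from the xtrace lines into a simplified sketch (the real helper also special-cases sudo-wrapped processes and custom signals):

    killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                             # still running?
      [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                                    # reap it; ignore its exit status
    }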
00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.738 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.738 [2024-07-22 18:27:20.521172] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:08.738 [2024-07-22 18:27:20.521655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.738 [2024-07-22 18:27:20.705435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.996 [2024-07-22 18:27:20.969699] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.996 [2024-07-22 18:27:20.969773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.996 [2024-07-22 18:27:20.969806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.996 [2024-07-22 18:27:20.969823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.996 [2024-07-22 18:27:20.969835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.996 [2024-07-22 18:27:20.969888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.255 [2024-07-22 18:27:21.200951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:09.512 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.512 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:09.512 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.512 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:09.512 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.512 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.512 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.jij0jggWrV 00:20:09.512 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jij0jggWrV 00:20:09.512 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.770 [2024-07-22 18:27:21.737970] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.770 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:10.028 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:10.287 [2024-07-22 18:27:22.238124] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.287 [2024-07-22 18:27:22.238432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.287 18:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:10.548 malloc0 00:20:10.808 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.808 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jij0jggWrV 00:20:11.066 [2024-07-22 18:27:23.021899] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:11.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.067 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=77899 00:20:11.067 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:11.067 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:11.067 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 77899 /var/tmp/bdevperf.sock 00:20:11.067 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77899 ']' 00:20:11.067 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.067 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.067 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.067 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.067 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.325 [2024-07-22 18:27:23.147068] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:11.325 [2024-07-22 18:27:23.147549] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77899 ] 00:20:11.325 [2024-07-22 18:27:23.322996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.584 [2024-07-22 18:27:23.566442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.842 [2024-07-22 18:27:23.769943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:12.101 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.101 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:12.101 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jij0jggWrV 00:20:12.359 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:12.622 [2024-07-22 18:27:24.571692] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.894 nvme0n1 00:20:12.894 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:12.894 Running I/O for 1 seconds... 00:20:14.271 00:20:14.271 Latency(us) 00:20:14.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.271 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:14.271 Verification LBA range: start 0x0 length 0x2000 00:20:14.271 nvme0n1 : 1.04 2810.44 10.98 0.00 0.00 44808.28 8519.68 26691.03 00:20:14.271 =================================================================================================================== 00:20:14.271 Total : 2810.44 10.98 0.00 0.00 44808.28 8519.68 26691.03 00:20:14.271 0 00:20:14.271 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 77899 00:20:14.271 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77899 ']' 00:20:14.271 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77899 00:20:14.271 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:14.271 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:14.271 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77899 00:20:14.271 killing process with pid 77899 00:20:14.271 Received shutdown signal, test time was about 1.000000 seconds 00:20:14.271 00:20:14.271 Latency(us) 00:20:14.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.271 =================================================================================================================== 00:20:14.271 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.271 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:14.271 
18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:14.271 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77899' 00:20:14.271 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77899 00:20:14.271 18:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77899 00:20:15.207 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 77840 00:20:15.207 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77840 ']' 00:20:15.207 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77840 00:20:15.207 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:15.207 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.207 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77840 00:20:15.207 killing process with pid 77840 00:20:15.207 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:15.207 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:15.207 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77840' 00:20:15.207 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77840 00:20:15.207 [2024-07-22 18:27:27.026153] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:15.207 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77840 00:20:16.584 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:20:16.584 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.584 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:16.584 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.585 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=77970 00:20:16.585 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:16.585 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 77970 00:20:16.585 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 77970 ']' 00:20:16.585 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.585 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:16.585 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
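This third target (pid 77970) is started without a -c config and is configured afterwards purely over its RPC socket through the rpc_cmd helper seen in the trace that follows. A simplified stand-in for rpc_cmd, assuming each stdin line is a complete rpc.py command line (the real helper in autotest_common.sh keeps a persistent rpc.py session for speed):

    rpc_cmd() {
      local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
      if (( $# )); then
        "$rpc" -s /var/tmp/spdk.sock "$@"
      else
        # No arguments: treat every non-empty stdin line as one RPC invocation.
        while read -r line; do
          [ -z "$line" ] || "$rpc" -s /var/tmp/spdk.sock $line
        done
      fi
    }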
00:20:16.585 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:16.585 18:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.585 [2024-07-22 18:27:28.572496] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:16.585 [2024-07-22 18:27:28.572864] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.842 [2024-07-22 18:27:28.740581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.100 [2024-07-22 18:27:28.983148] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.100 [2024-07-22 18:27:28.983511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.100 [2024-07-22 18:27:28.983673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.100 [2024-07-22 18:27:28.983814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.100 [2024-07-22 18:27:28.983864] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.100 [2024-07-22 18:27:28.984010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.357 [2024-07-22 18:27:29.191117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:17.615 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:17.615 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:17.615 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.615 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:17.615 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.615 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.615 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:20:17.615 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.615 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.615 [2024-07-22 18:27:29.490383] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.615 malloc0 00:20:17.615 [2024-07-22 18:27:29.557233] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.616 [2024-07-22 18:27:29.557512] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
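The target is now listening with TLS on 10.0.0.2:4420 and the script waits for a bdevperf instance to expose its own RPC socket. A sketch of that client-side launch, using the exact bdevperf invocation traced right after this (tls.sh@252) and a simple wait loop in place of waitforlisten:

    # -z starts bdevperf with no bdevs; it idles until configured over RPC.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    bdevperf_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
    done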
00:20:17.616 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.616 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=78002 00:20:17.616 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:17.616 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 78002 /var/tmp/bdevperf.sock 00:20:17.616 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78002 ']' 00:20:17.616 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.616 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:17.616 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.616 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:17.616 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.874 [2024-07-22 18:27:29.679197] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:17.874 [2024-07-22 18:27:29.679641] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78002 ] 00:20:17.874 [2024-07-22 18:27:29.849072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.132 [2024-07-22 18:27:30.121511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.390 [2024-07-22 18:27:30.333505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:18.647 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.647 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:18.647 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jij0jggWrV 00:20:18.904 18:27:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:19.160 [2024-07-22 18:27:31.015123] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.160 nvme0n1 00:20:19.160 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:19.418 Running I/O for 1 seconds... 
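While the 1-second verify run executes, the sequence traced just above is the whole TLS client path: register the PSK file as a keyring entry, attach the controller by key name, then drive the workload. Collected into one sketch with the socket paths and key file of this run:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # Keyring-based PSK: add the key, then reference it by name in the attach,
    # instead of the deprecated raw PSK-path attribute warned about earlier.
    $rpc keyring_file_add_key key0 /tmp/tmp.jij0jggWrV
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # Run the verify workload defined on the bdevperf command line.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests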
00:20:20.355 00:20:20.355 Latency(us) 00:20:20.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.355 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:20.355 Verification LBA range: start 0x0 length 0x2000 00:20:20.355 nvme0n1 : 1.04 2664.24 10.41 0.00 0.00 47124.42 9353.77 28955.00 00:20:20.355 =================================================================================================================== 00:20:20.355 Total : 2664.24 10.41 0.00 0.00 47124.42 9353.77 28955.00 00:20:20.355 0 00:20:20.355 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:20:20.355 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.355 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.613 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.613 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:20:20.613 "subsystems": [ 00:20:20.613 { 00:20:20.613 "subsystem": "keyring", 00:20:20.613 "config": [ 00:20:20.613 { 00:20:20.613 "method": "keyring_file_add_key", 00:20:20.613 "params": { 00:20:20.613 "name": "key0", 00:20:20.613 "path": "/tmp/tmp.jij0jggWrV" 00:20:20.613 } 00:20:20.613 } 00:20:20.613 ] 00:20:20.613 }, 00:20:20.613 { 00:20:20.613 "subsystem": "iobuf", 00:20:20.613 "config": [ 00:20:20.613 { 00:20:20.613 "method": "iobuf_set_options", 00:20:20.613 "params": { 00:20:20.613 "small_pool_count": 8192, 00:20:20.613 "large_pool_count": 1024, 00:20:20.613 "small_bufsize": 8192, 00:20:20.613 "large_bufsize": 135168 00:20:20.613 } 00:20:20.613 } 00:20:20.613 ] 00:20:20.613 }, 00:20:20.613 { 00:20:20.613 "subsystem": "sock", 00:20:20.613 "config": [ 00:20:20.613 { 00:20:20.613 "method": "sock_set_default_impl", 00:20:20.613 "params": { 00:20:20.613 "impl_name": "uring" 00:20:20.613 } 00:20:20.613 }, 00:20:20.613 { 00:20:20.613 "method": "sock_impl_set_options", 00:20:20.613 "params": { 00:20:20.613 "impl_name": "ssl", 00:20:20.613 "recv_buf_size": 4096, 00:20:20.613 "send_buf_size": 4096, 00:20:20.613 "enable_recv_pipe": true, 00:20:20.613 "enable_quickack": false, 00:20:20.613 "enable_placement_id": 0, 00:20:20.613 "enable_zerocopy_send_server": true, 00:20:20.613 "enable_zerocopy_send_client": false, 00:20:20.613 "zerocopy_threshold": 0, 00:20:20.613 "tls_version": 0, 00:20:20.613 "enable_ktls": false 00:20:20.613 } 00:20:20.613 }, 00:20:20.613 { 00:20:20.613 "method": "sock_impl_set_options", 00:20:20.613 "params": { 00:20:20.613 "impl_name": "posix", 00:20:20.613 "recv_buf_size": 2097152, 00:20:20.613 "send_buf_size": 2097152, 00:20:20.613 "enable_recv_pipe": true, 00:20:20.613 "enable_quickack": false, 00:20:20.613 "enable_placement_id": 0, 00:20:20.613 "enable_zerocopy_send_server": true, 00:20:20.613 "enable_zerocopy_send_client": false, 00:20:20.613 "zerocopy_threshold": 0, 00:20:20.613 "tls_version": 0, 00:20:20.613 "enable_ktls": false 00:20:20.613 } 00:20:20.613 }, 00:20:20.613 { 00:20:20.613 "method": "sock_impl_set_options", 00:20:20.613 "params": { 00:20:20.613 "impl_name": "uring", 00:20:20.613 "recv_buf_size": 2097152, 00:20:20.613 "send_buf_size": 2097152, 00:20:20.613 "enable_recv_pipe": true, 00:20:20.613 "enable_quickack": false, 00:20:20.613 "enable_placement_id": 0, 00:20:20.613 "enable_zerocopy_send_server": false, 00:20:20.613 "enable_zerocopy_send_client": false, 00:20:20.613 
"zerocopy_threshold": 0, 00:20:20.613 "tls_version": 0, 00:20:20.613 "enable_ktls": false 00:20:20.613 } 00:20:20.613 } 00:20:20.613 ] 00:20:20.613 }, 00:20:20.613 { 00:20:20.613 "subsystem": "vmd", 00:20:20.613 "config": [] 00:20:20.613 }, 00:20:20.613 { 00:20:20.613 "subsystem": "accel", 00:20:20.613 "config": [ 00:20:20.613 { 00:20:20.613 "method": "accel_set_options", 00:20:20.613 "params": { 00:20:20.613 "small_cache_size": 128, 00:20:20.613 "large_cache_size": 16, 00:20:20.613 "task_count": 2048, 00:20:20.613 "sequence_count": 2048, 00:20:20.613 "buf_count": 2048 00:20:20.613 } 00:20:20.613 } 00:20:20.613 ] 00:20:20.613 }, 00:20:20.613 { 00:20:20.613 "subsystem": "bdev", 00:20:20.613 "config": [ 00:20:20.613 { 00:20:20.614 "method": "bdev_set_options", 00:20:20.614 "params": { 00:20:20.614 "bdev_io_pool_size": 65535, 00:20:20.614 "bdev_io_cache_size": 256, 00:20:20.614 "bdev_auto_examine": true, 00:20:20.614 "iobuf_small_cache_size": 128, 00:20:20.614 "iobuf_large_cache_size": 16 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "bdev_raid_set_options", 00:20:20.614 "params": { 00:20:20.614 "process_window_size_kb": 1024, 00:20:20.614 "process_max_bandwidth_mb_sec": 0 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "bdev_iscsi_set_options", 00:20:20.614 "params": { 00:20:20.614 "timeout_sec": 30 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "bdev_nvme_set_options", 00:20:20.614 "params": { 00:20:20.614 "action_on_timeout": "none", 00:20:20.614 "timeout_us": 0, 00:20:20.614 "timeout_admin_us": 0, 00:20:20.614 "keep_alive_timeout_ms": 10000, 00:20:20.614 "arbitration_burst": 0, 00:20:20.614 "low_priority_weight": 0, 00:20:20.614 "medium_priority_weight": 0, 00:20:20.614 "high_priority_weight": 0, 00:20:20.614 "nvme_adminq_poll_period_us": 10000, 00:20:20.614 "nvme_ioq_poll_period_us": 0, 00:20:20.614 "io_queue_requests": 0, 00:20:20.614 "delay_cmd_submit": true, 00:20:20.614 "transport_retry_count": 4, 00:20:20.614 "bdev_retry_count": 3, 00:20:20.614 "transport_ack_timeout": 0, 00:20:20.614 "ctrlr_loss_timeout_sec": 0, 00:20:20.614 "reconnect_delay_sec": 0, 00:20:20.614 "fast_io_fail_timeout_sec": 0, 00:20:20.614 "disable_auto_failback": false, 00:20:20.614 "generate_uuids": false, 00:20:20.614 "transport_tos": 0, 00:20:20.614 "nvme_error_stat": false, 00:20:20.614 "rdma_srq_size": 0, 00:20:20.614 "io_path_stat": false, 00:20:20.614 "allow_accel_sequence": false, 00:20:20.614 "rdma_max_cq_size": 0, 00:20:20.614 "rdma_cm_event_timeout_ms": 0, 00:20:20.614 "dhchap_digests": [ 00:20:20.614 "sha256", 00:20:20.614 "sha384", 00:20:20.614 "sha512" 00:20:20.614 ], 00:20:20.614 "dhchap_dhgroups": [ 00:20:20.614 "null", 00:20:20.614 "ffdhe2048", 00:20:20.614 "ffdhe3072", 00:20:20.614 "ffdhe4096", 00:20:20.614 "ffdhe6144", 00:20:20.614 "ffdhe8192" 00:20:20.614 ] 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "bdev_nvme_set_hotplug", 00:20:20.614 "params": { 00:20:20.614 "period_us": 100000, 00:20:20.614 "enable": false 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "bdev_malloc_create", 00:20:20.614 "params": { 00:20:20.614 "name": "malloc0", 00:20:20.614 "num_blocks": 8192, 00:20:20.614 "block_size": 4096, 00:20:20.614 "physical_block_size": 4096, 00:20:20.614 "uuid": "4773a469-0141-4953-93f2-78fef8ef66e3", 00:20:20.614 "optimal_io_boundary": 0, 00:20:20.614 "md_size": 0, 00:20:20.614 "dif_type": 0, 00:20:20.614 "dif_is_head_of_md": false, 00:20:20.614 "dif_pi_format": 0 00:20:20.614 } 
00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "bdev_wait_for_examine" 00:20:20.614 } 00:20:20.614 ] 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "subsystem": "nbd", 00:20:20.614 "config": [] 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "subsystem": "scheduler", 00:20:20.614 "config": [ 00:20:20.614 { 00:20:20.614 "method": "framework_set_scheduler", 00:20:20.614 "params": { 00:20:20.614 "name": "static" 00:20:20.614 } 00:20:20.614 } 00:20:20.614 ] 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "subsystem": "nvmf", 00:20:20.614 "config": [ 00:20:20.614 { 00:20:20.614 "method": "nvmf_set_config", 00:20:20.614 "params": { 00:20:20.614 "discovery_filter": "match_any", 00:20:20.614 "admin_cmd_passthru": { 00:20:20.614 "identify_ctrlr": false 00:20:20.614 } 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "nvmf_set_max_subsystems", 00:20:20.614 "params": { 00:20:20.614 "max_subsystems": 1024 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "nvmf_set_crdt", 00:20:20.614 "params": { 00:20:20.614 "crdt1": 0, 00:20:20.614 "crdt2": 0, 00:20:20.614 "crdt3": 0 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "nvmf_create_transport", 00:20:20.614 "params": { 00:20:20.614 "trtype": "TCP", 00:20:20.614 "max_queue_depth": 128, 00:20:20.614 "max_io_qpairs_per_ctrlr": 127, 00:20:20.614 "in_capsule_data_size": 4096, 00:20:20.614 "max_io_size": 131072, 00:20:20.614 "io_unit_size": 131072, 00:20:20.614 "max_aq_depth": 128, 00:20:20.614 "num_shared_buffers": 511, 00:20:20.614 "buf_cache_size": 4294967295, 00:20:20.614 "dif_insert_or_strip": false, 00:20:20.614 "zcopy": false, 00:20:20.614 "c2h_success": false, 00:20:20.614 "sock_priority": 0, 00:20:20.614 "abort_timeout_sec": 1, 00:20:20.614 "ack_timeout": 0, 00:20:20.614 "data_wr_pool_size": 0 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "nvmf_create_subsystem", 00:20:20.614 "params": { 00:20:20.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.614 "allow_any_host": false, 00:20:20.614 "serial_number": "00000000000000000000", 00:20:20.614 "model_number": "SPDK bdev Controller", 00:20:20.614 "max_namespaces": 32, 00:20:20.614 "min_cntlid": 1, 00:20:20.614 "max_cntlid": 65519, 00:20:20.614 "ana_reporting": false 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "nvmf_subsystem_add_host", 00:20:20.614 "params": { 00:20:20.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.614 "host": "nqn.2016-06.io.spdk:host1", 00:20:20.614 "psk": "key0" 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "nvmf_subsystem_add_ns", 00:20:20.614 "params": { 00:20:20.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.614 "namespace": { 00:20:20.614 "nsid": 1, 00:20:20.614 "bdev_name": "malloc0", 00:20:20.614 "nguid": "4773A4690141495393F278FEF8EF66E3", 00:20:20.614 "uuid": "4773a469-0141-4953-93f2-78fef8ef66e3", 00:20:20.614 "no_auto_visible": false 00:20:20.614 } 00:20:20.614 } 00:20:20.614 }, 00:20:20.614 { 00:20:20.614 "method": "nvmf_subsystem_add_listener", 00:20:20.614 "params": { 00:20:20.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.614 "listen_address": { 00:20:20.614 "trtype": "TCP", 00:20:20.614 "adrfam": "IPv4", 00:20:20.614 "traddr": "10.0.0.2", 00:20:20.614 "trsvcid": "4420" 00:20:20.614 }, 00:20:20.614 "secure_channel": false, 00:20:20.614 "sock_impl": "ssl" 00:20:20.614 } 00:20:20.614 } 00:20:20.614 ] 00:20:20.614 } 00:20:20.614 ] 00:20:20.614 }' 00:20:20.614 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:20.873 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:20.873 "subsystems": [ 00:20:20.873 { 00:20:20.873 "subsystem": "keyring", 00:20:20.873 "config": [ 00:20:20.873 { 00:20:20.873 "method": "keyring_file_add_key", 00:20:20.873 "params": { 00:20:20.873 "name": "key0", 00:20:20.873 "path": "/tmp/tmp.jij0jggWrV" 00:20:20.873 } 00:20:20.873 } 00:20:20.873 ] 00:20:20.873 }, 00:20:20.873 { 00:20:20.873 "subsystem": "iobuf", 00:20:20.873 "config": [ 00:20:20.873 { 00:20:20.873 "method": "iobuf_set_options", 00:20:20.873 "params": { 00:20:20.873 "small_pool_count": 8192, 00:20:20.873 "large_pool_count": 1024, 00:20:20.873 "small_bufsize": 8192, 00:20:20.873 "large_bufsize": 135168 00:20:20.873 } 00:20:20.873 } 00:20:20.873 ] 00:20:20.873 }, 00:20:20.873 { 00:20:20.873 "subsystem": "sock", 00:20:20.873 "config": [ 00:20:20.873 { 00:20:20.873 "method": "sock_set_default_impl", 00:20:20.873 "params": { 00:20:20.873 "impl_name": "uring" 00:20:20.873 } 00:20:20.873 }, 00:20:20.873 { 00:20:20.873 "method": "sock_impl_set_options", 00:20:20.873 "params": { 00:20:20.873 "impl_name": "ssl", 00:20:20.873 "recv_buf_size": 4096, 00:20:20.873 "send_buf_size": 4096, 00:20:20.873 "enable_recv_pipe": true, 00:20:20.873 "enable_quickack": false, 00:20:20.873 "enable_placement_id": 0, 00:20:20.873 "enable_zerocopy_send_server": true, 00:20:20.873 "enable_zerocopy_send_client": false, 00:20:20.873 "zerocopy_threshold": 0, 00:20:20.873 "tls_version": 0, 00:20:20.873 "enable_ktls": false 00:20:20.873 } 00:20:20.873 }, 00:20:20.873 { 00:20:20.873 "method": "sock_impl_set_options", 00:20:20.874 "params": { 00:20:20.874 "impl_name": "posix", 00:20:20.874 "recv_buf_size": 2097152, 00:20:20.874 "send_buf_size": 2097152, 00:20:20.874 "enable_recv_pipe": true, 00:20:20.874 "enable_quickack": false, 00:20:20.874 "enable_placement_id": 0, 00:20:20.874 "enable_zerocopy_send_server": true, 00:20:20.874 "enable_zerocopy_send_client": false, 00:20:20.874 "zerocopy_threshold": 0, 00:20:20.874 "tls_version": 0, 00:20:20.874 "enable_ktls": false 00:20:20.874 } 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "method": "sock_impl_set_options", 00:20:20.874 "params": { 00:20:20.874 "impl_name": "uring", 00:20:20.874 "recv_buf_size": 2097152, 00:20:20.874 "send_buf_size": 2097152, 00:20:20.874 "enable_recv_pipe": true, 00:20:20.874 "enable_quickack": false, 00:20:20.874 "enable_placement_id": 0, 00:20:20.874 "enable_zerocopy_send_server": false, 00:20:20.874 "enable_zerocopy_send_client": false, 00:20:20.874 "zerocopy_threshold": 0, 00:20:20.874 "tls_version": 0, 00:20:20.874 "enable_ktls": false 00:20:20.874 } 00:20:20.874 } 00:20:20.874 ] 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "subsystem": "vmd", 00:20:20.874 "config": [] 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "subsystem": "accel", 00:20:20.874 "config": [ 00:20:20.874 { 00:20:20.874 "method": "accel_set_options", 00:20:20.874 "params": { 00:20:20.874 "small_cache_size": 128, 00:20:20.874 "large_cache_size": 16, 00:20:20.874 "task_count": 2048, 00:20:20.874 "sequence_count": 2048, 00:20:20.874 "buf_count": 2048 00:20:20.874 } 00:20:20.874 } 00:20:20.874 ] 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "subsystem": "bdev", 00:20:20.874 "config": [ 00:20:20.874 { 00:20:20.874 "method": "bdev_set_options", 00:20:20.874 "params": { 00:20:20.874 "bdev_io_pool_size": 65535, 00:20:20.874 "bdev_io_cache_size": 256, 00:20:20.874 "bdev_auto_examine": true, 
00:20:20.874 "iobuf_small_cache_size": 128, 00:20:20.874 "iobuf_large_cache_size": 16 00:20:20.874 } 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "method": "bdev_raid_set_options", 00:20:20.874 "params": { 00:20:20.874 "process_window_size_kb": 1024, 00:20:20.874 "process_max_bandwidth_mb_sec": 0 00:20:20.874 } 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "method": "bdev_iscsi_set_options", 00:20:20.874 "params": { 00:20:20.874 "timeout_sec": 30 00:20:20.874 } 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "method": "bdev_nvme_set_options", 00:20:20.874 "params": { 00:20:20.874 "action_on_timeout": "none", 00:20:20.874 "timeout_us": 0, 00:20:20.874 "timeout_admin_us": 0, 00:20:20.874 "keep_alive_timeout_ms": 10000, 00:20:20.874 "arbitration_burst": 0, 00:20:20.874 "low_priority_weight": 0, 00:20:20.874 "medium_priority_weight": 0, 00:20:20.874 "high_priority_weight": 0, 00:20:20.874 "nvme_adminq_poll_period_us": 10000, 00:20:20.874 "nvme_ioq_poll_period_us": 0, 00:20:20.874 "io_queue_requests": 512, 00:20:20.874 "delay_cmd_submit": true, 00:20:20.874 "transport_retry_count": 4, 00:20:20.874 "bdev_retry_count": 3, 00:20:20.874 "transport_ack_timeout": 0, 00:20:20.874 "ctrlr_loss_timeout_sec": 0, 00:20:20.874 "reconnect_delay_sec": 0, 00:20:20.874 "fast_io_fail_timeout_sec": 0, 00:20:20.874 "disable_auto_failback": false, 00:20:20.874 "generate_uuids": false, 00:20:20.874 "transport_tos": 0, 00:20:20.874 "nvme_error_stat": false, 00:20:20.874 "rdma_srq_size": 0, 00:20:20.874 "io_path_stat": false, 00:20:20.874 "allow_accel_sequence": false, 00:20:20.874 "rdma_max_cq_size": 0, 00:20:20.874 "rdma_cm_event_timeout_ms": 0, 00:20:20.874 "dhchap_digests": [ 00:20:20.874 "sha256", 00:20:20.874 "sha384", 00:20:20.874 "sha512" 00:20:20.874 ], 00:20:20.874 "dhchap_dhgroups": [ 00:20:20.874 "null", 00:20:20.874 "ffdhe2048", 00:20:20.874 "ffdhe3072", 00:20:20.874 "ffdhe4096", 00:20:20.874 "ffdhe6144", 00:20:20.874 "ffdhe8192" 00:20:20.874 ] 00:20:20.874 } 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "method": "bdev_nvme_attach_controller", 00:20:20.874 "params": { 00:20:20.874 "name": "nvme0", 00:20:20.874 "trtype": "TCP", 00:20:20.874 "adrfam": "IPv4", 00:20:20.874 "traddr": "10.0.0.2", 00:20:20.874 "trsvcid": "4420", 00:20:20.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.874 "prchk_reftag": false, 00:20:20.874 "prchk_guard": false, 00:20:20.874 "ctrlr_loss_timeout_sec": 0, 00:20:20.874 "reconnect_delay_sec": 0, 00:20:20.874 "fast_io_fail_timeout_sec": 0, 00:20:20.874 "psk": "key0", 00:20:20.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.874 "hdgst": false, 00:20:20.874 "ddgst": false 00:20:20.874 } 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "method": "bdev_nvme_set_hotplug", 00:20:20.874 "params": { 00:20:20.874 "period_us": 100000, 00:20:20.874 "enable": false 00:20:20.874 } 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "method": "bdev_enable_histogram", 00:20:20.874 "params": { 00:20:20.874 "name": "nvme0n1", 00:20:20.874 "enable": true 00:20:20.874 } 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "method": "bdev_wait_for_examine" 00:20:20.874 } 00:20:20.874 ] 00:20:20.874 }, 00:20:20.874 { 00:20:20.874 "subsystem": "nbd", 00:20:20.874 "config": [] 00:20:20.874 } 00:20:20.874 ] 00:20:20.874 }' 00:20:20.874 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 78002 00:20:20.874 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78002 ']' 00:20:20.874 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 78002 00:20:20.874 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:20.874 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.874 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78002 00:20:20.874 killing process with pid 78002 00:20:20.874 Received shutdown signal, test time was about 1.000000 seconds 00:20:20.874 00:20:20.874 Latency(us) 00:20:20.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.874 =================================================================================================================== 00:20:20.874 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.874 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:20.874 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:20.874 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78002' 00:20:20.874 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78002 00:20:20.874 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78002 00:20:22.248 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 77970 00:20:22.248 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 77970 ']' 00:20:22.248 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 77970 00:20:22.248 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:22.248 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.248 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77970 00:20:22.248 killing process with pid 77970 00:20:22.248 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:22.248 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:22.248 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77970' 00:20:22.248 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 77970 00:20:22.248 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 77970 00:20:23.627 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:23.627 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:23.627 "subsystems": [ 00:20:23.627 { 00:20:23.627 "subsystem": "keyring", 00:20:23.627 "config": [ 00:20:23.627 { 00:20:23.627 "method": "keyring_file_add_key", 00:20:23.627 "params": { 00:20:23.627 "name": "key0", 00:20:23.627 "path": "/tmp/tmp.jij0jggWrV" 00:20:23.627 } 00:20:23.627 } 00:20:23.627 ] 00:20:23.627 }, 00:20:23.627 { 00:20:23.627 "subsystem": "iobuf", 00:20:23.627 "config": [ 00:20:23.627 { 00:20:23.627 "method": "iobuf_set_options", 00:20:23.627 "params": { 00:20:23.627 "small_pool_count": 8192, 00:20:23.627 "large_pool_count": 1024, 00:20:23.627 "small_bufsize": 8192, 00:20:23.627 "large_bufsize": 135168 00:20:23.627 } 00:20:23.627 } 
00:20:23.627 ] 00:20:23.627 }, 00:20:23.627 { 00:20:23.627 "subsystem": "sock", 00:20:23.627 "config": [ 00:20:23.627 { 00:20:23.627 "method": "sock_set_default_impl", 00:20:23.627 "params": { 00:20:23.627 "impl_name": "uring" 00:20:23.627 } 00:20:23.627 }, 00:20:23.627 { 00:20:23.627 "method": "sock_impl_set_options", 00:20:23.627 "params": { 00:20:23.627 "impl_name": "ssl", 00:20:23.627 "recv_buf_size": 4096, 00:20:23.627 "send_buf_size": 4096, 00:20:23.627 "enable_recv_pipe": true, 00:20:23.627 "enable_quickack": false, 00:20:23.627 "enable_placement_id": 0, 00:20:23.627 "enable_zerocopy_send_server": true, 00:20:23.627 "enable_zerocopy_send_client": false, 00:20:23.627 "zerocopy_threshold": 0, 00:20:23.627 "tls_version": 0, 00:20:23.627 "enable_ktls": false 00:20:23.627 } 00:20:23.627 }, 00:20:23.627 { 00:20:23.627 "method": "sock_impl_set_options", 00:20:23.627 "params": { 00:20:23.627 "impl_name": "posix", 00:20:23.627 "recv_buf_size": 2097152, 00:20:23.627 "send_buf_size": 2097152, 00:20:23.627 "enable_recv_pipe": true, 00:20:23.627 "enable_quickack": false, 00:20:23.627 "enable_placement_id": 0, 00:20:23.627 "enable_zerocopy_send_server": true, 00:20:23.627 "enable_zerocopy_send_client": false, 00:20:23.627 "zerocopy_threshold": 0, 00:20:23.627 "tls_version": 0, 00:20:23.627 "enable_ktls": false 00:20:23.627 } 00:20:23.627 }, 00:20:23.627 { 00:20:23.627 "method": "sock_impl_set_options", 00:20:23.627 "params": { 00:20:23.627 "impl_name": "uring", 00:20:23.627 "recv_buf_size": 2097152, 00:20:23.627 "send_buf_size": 2097152, 00:20:23.627 "enable_recv_pipe": true, 00:20:23.627 "enable_quickack": false, 00:20:23.627 "enable_placement_id": 0, 00:20:23.627 "enable_zerocopy_send_server": false, 00:20:23.627 "enable_zerocopy_send_client": false, 00:20:23.627 "zerocopy_threshold": 0, 00:20:23.627 "tls_version": 0, 00:20:23.627 "enable_ktls": false 00:20:23.627 } 00:20:23.627 } 00:20:23.627 ] 00:20:23.627 }, 00:20:23.627 { 00:20:23.627 "subsystem": "vmd", 00:20:23.627 "config": [] 00:20:23.627 }, 00:20:23.627 { 00:20:23.627 "subsystem": "accel", 00:20:23.627 "config": [ 00:20:23.627 { 00:20:23.627 "method": "accel_set_options", 00:20:23.627 "params": { 00:20:23.627 "small_cache_size": 128, 00:20:23.627 "large_cache_size": 16, 00:20:23.627 "task_count": 2048, 00:20:23.627 "sequence_count": 2048, 00:20:23.627 "buf_count": 2048 00:20:23.627 } 00:20:23.627 } 00:20:23.627 ] 00:20:23.627 }, 00:20:23.627 { 00:20:23.627 "subsystem": "bdev", 00:20:23.628 "config": [ 00:20:23.628 { 00:20:23.628 "method": "bdev_set_options", 00:20:23.628 "params": { 00:20:23.628 "bdev_io_pool_size": 65535, 00:20:23.628 "bdev_io_cache_size": 256, 00:20:23.628 "bdev_auto_examine": true, 00:20:23.628 "iobuf_small_cache_size": 128, 00:20:23.628 "iobuf_large_cache_size": 16 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "bdev_raid_set_options", 00:20:23.628 "params": { 00:20:23.628 "process_window_size_kb": 1024, 00:20:23.628 "process_max_bandwidth_mb_sec": 0 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "bdev_iscsi_set_options", 00:20:23.628 "params": { 00:20:23.628 "timeout_sec": 30 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "bdev_nvme_set_options", 00:20:23.628 "params": { 00:20:23.628 "action_on_timeout": "none", 00:20:23.628 "timeout_us": 0, 00:20:23.628 "timeout_admin_us": 0, 00:20:23.628 "keep_alive_timeout_ms": 10000, 00:20:23.628 "arbitration_burst": 0, 00:20:23.628 "low_priority_weight": 0, 00:20:23.628 "medium_priority_weight": 0, 00:20:23.628 
"high_priority_weight": 0, 00:20:23.628 "nvme_adminq_poll_period_us": 10000, 00:20:23.628 "nvme_ioq_poll_period_us": 0, 00:20:23.628 "io_queue_requests": 0, 00:20:23.628 "delay_cmd_submit": true, 00:20:23.628 "transport_retry_count": 4, 00:20:23.628 "bdev_retry_count": 3, 00:20:23.628 "transport_ack_timeout": 0, 00:20:23.628 "ctrlr_loss_timeout_sec": 0, 00:20:23.628 "reconnect_delay_sec": 0, 00:20:23.628 "fast_io_fail_timeout_sec": 0, 00:20:23.628 "disable_auto_failback": false, 00:20:23.628 "generate_uuids": false, 00:20:23.628 "transport_tos": 0, 00:20:23.628 "nvme_error_stat": false, 00:20:23.628 "rdma_srq_size": 0, 00:20:23.628 "io_path_stat": false, 00:20:23.628 "allow_accel_sequence": false, 00:20:23.628 "rdma_max_cq_size": 0, 00:20:23.628 "rdma_cm_event_timeout_ms": 0, 00:20:23.628 "dhchap_digests": [ 00:20:23.628 "sha256", 00:20:23.628 "sha384", 00:20:23.628 "sha512" 00:20:23.628 ], 00:20:23.628 "dhchap_dhgroups": [ 00:20:23.628 "null", 00:20:23.628 "ffdhe2048", 00:20:23.628 "ffdhe3072", 00:20:23.628 "ffdhe4096", 00:20:23.628 "ffdhe6144", 00:20:23.628 "ffdhe8192" 00:20:23.628 ] 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "bdev_nvme_set_hotplug", 00:20:23.628 "params": { 00:20:23.628 "period_us": 100000, 00:20:23.628 "enable": false 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "bdev_malloc_create", 00:20:23.628 "params": { 00:20:23.628 "name": "malloc0", 00:20:23.628 "num_blocks": 8192, 00:20:23.628 "block_size": 4096, 00:20:23.628 "physical_block_size": 4096, 00:20:23.628 "uuid": "4773a469-0141-4953-93f2-78fef8ef66e3", 00:20:23.628 "optimal_io_boundary": 0, 00:20:23.628 "md_size": 0, 00:20:23.628 "dif_type": 0, 00:20:23.628 "dif_is_head_of_md": false, 00:20:23.628 "dif_pi_format": 0 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "bdev_wait_for_examine" 00:20:23.628 } 00:20:23.628 ] 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "subsystem": "nbd", 00:20:23.628 "config": [] 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "subsystem": "scheduler", 00:20:23.628 "config": [ 00:20:23.628 { 00:20:23.628 "method": "framework_set_scheduler", 00:20:23.628 "params": { 00:20:23.628 "name": "static" 00:20:23.628 } 00:20:23.628 } 00:20:23.628 ] 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "subsystem": "nvmf", 00:20:23.628 "config": [ 00:20:23.628 { 00:20:23.628 "method": "nvmf_set_config", 00:20:23.628 "params": { 00:20:23.628 "discovery_filter": "match_any", 00:20:23.628 "admin_cmd_passthru": { 00:20:23.628 "identify_ctrlr": false 00:20:23.628 } 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "nvmf_set_max_subsystems", 00:20:23.628 "params": { 00:20:23.628 "max_subsystems": 1024 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "nvmf_set_crdt", 00:20:23.628 "params": { 00:20:23.628 "crdt1": 0, 00:20:23.628 "crdt2": 0, 00:20:23.628 "crdt3": 0 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "nvmf_create_transport", 00:20:23.628 "params": { 00:20:23.628 "trtype": "TCP", 00:20:23.628 "max_queue_depth": 128, 00:20:23.628 "max_io_qpairs_per_ctrlr": 127, 00:20:23.628 "in_capsule_data_size": 4096, 00:20:23.628 "max_io_size": 131072, 00:20:23.628 "io_unit_size": 131072, 00:20:23.628 "max_aq_depth": 128, 00:20:23.628 "num_shared_buffers": 511, 00:20:23.628 "buf_cache_size": 4294967295, 00:20:23.628 "dif_insert_or_strip": false, 00:20:23.628 "zcopy": false, 00:20:23.628 "c2h_success": false, 00:20:23.628 "sock_priority": 0, 00:20:23.628 "abort_timeout_sec": 1, 00:20:23.628 
"ack_timeout": 0, 00:20:23.628 "data_wr_pool_size": 0 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "nvmf_create_subsystem", 00:20:23.628 "params": { 00:20:23.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.628 "allow_any_host": false, 00:20:23.628 "serial_number": "00000000000000000000", 00:20:23.628 "model_number": "SPDK bdev Controller", 00:20:23.628 "max_namespaces": 32, 00:20:23.628 "min_cntlid": 1, 00:20:23.628 "max_cntlid": 65519, 00:20:23.628 "ana_reporting": false 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "nvmf_subsystem_add_host", 00:20:23.628 "params": { 00:20:23.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.628 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.628 "psk": "key0" 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "nvmf_subsystem_add_ns", 00:20:23.628 "params": { 00:20:23.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.628 "namespace": { 00:20:23.628 "nsid": 1, 00:20:23.628 "bdev_name": "malloc0", 00:20:23.628 "nguid": "4773A4690141495393F278FEF8EF66E3", 00:20:23.628 "uuid": "4773a469-0141-4953-93f2-78fef8ef66e3", 00:20:23.628 "no_auto_visible": false 00:20:23.628 } 00:20:23.628 } 00:20:23.628 }, 00:20:23.628 { 00:20:23.628 "method": "nvmf_subsystem_add_listener", 00:20:23.628 "params": { 00:20:23.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.628 "listen_address": { 00:20:23.628 "trtype": "TCP", 00:20:23.628 "adrfam": "IPv4", 00:20:23.628 "traddr": "10.0.0.2", 00:20:23.628 "trsvcid": "4420" 00:20:23.628 }, 00:20:23.628 "secure_channel": false, 00:20:23.628 "sock_impl": "ssl" 00:20:23.628 } 00:20:23.628 } 00:20:23.628 ] 00:20:23.628 } 00:20:23.628 ] 00:20:23.628 }' 00:20:23.628 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.628 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:23.628 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.628 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=78086 00:20:23.628 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 78086 00:20:23.629 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78086 ']' 00:20:23.629 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:23.629 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.629 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.629 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.629 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.629 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.629 [2024-07-22 18:27:35.526006] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:23.629 [2024-07-22 18:27:35.526227] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.887 [2024-07-22 18:27:35.708673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.146 [2024-07-22 18:27:35.951957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.146 [2024-07-22 18:27:35.952028] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.146 [2024-07-22 18:27:35.952045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.146 [2024-07-22 18:27:35.952059] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.146 [2024-07-22 18:27:35.952071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.146 [2024-07-22 18:27:35.952247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.405 [2024-07-22 18:27:36.278842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:24.704 [2024-07-22 18:27:36.465229] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.704 [2024-07-22 18:27:36.511611] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.704 [2024-07-22 18:27:36.511882] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=78118 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 78118 /var/tmp/bdevperf.sock 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 78118 ']' 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:24.704 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:24.704 "subsystems": [ 00:20:24.704 { 00:20:24.704 "subsystem": "keyring", 00:20:24.704 "config": [ 00:20:24.704 { 00:20:24.704 "method": "keyring_file_add_key", 00:20:24.704 "params": { 00:20:24.704 "name": "key0", 00:20:24.704 "path": "/tmp/tmp.jij0jggWrV" 00:20:24.704 } 00:20:24.704 } 00:20:24.704 ] 00:20:24.704 }, 00:20:24.704 { 00:20:24.704 "subsystem": "iobuf", 00:20:24.704 "config": [ 00:20:24.704 { 00:20:24.704 "method": "iobuf_set_options", 00:20:24.704 "params": { 00:20:24.704 "small_pool_count": 8192, 00:20:24.704 "large_pool_count": 1024, 00:20:24.704 "small_bufsize": 8192, 00:20:24.704 "large_bufsize": 135168 00:20:24.704 } 00:20:24.704 } 00:20:24.704 ] 00:20:24.704 }, 00:20:24.704 { 00:20:24.704 "subsystem": "sock", 00:20:24.704 "config": [ 00:20:24.704 { 00:20:24.704 "method": "sock_set_default_impl", 00:20:24.704 "params": { 00:20:24.704 "impl_name": "uring" 00:20:24.704 } 00:20:24.704 }, 00:20:24.704 { 00:20:24.704 "method": "sock_impl_set_options", 00:20:24.704 "params": { 00:20:24.704 "impl_name": "ssl", 00:20:24.704 "recv_buf_size": 4096, 00:20:24.704 "send_buf_size": 4096, 00:20:24.704 "enable_recv_pipe": true, 00:20:24.704 "enable_quickack": false, 00:20:24.704 "enable_placement_id": 0, 00:20:24.704 "enable_zerocopy_send_server": true, 00:20:24.704 "enable_zerocopy_send_client": false, 00:20:24.704 "zerocopy_threshold": 0, 00:20:24.704 "tls_version": 0, 00:20:24.704 "enable_ktls": false 00:20:24.704 } 00:20:24.704 }, 00:20:24.704 { 00:20:24.704 "method": "sock_impl_set_options", 00:20:24.704 "params": { 00:20:24.704 "impl_name": "posix", 00:20:24.704 "recv_buf_size": 2097152, 00:20:24.704 "send_buf_size": 2097152, 00:20:24.704 "enable_recv_pipe": true, 00:20:24.704 "enable_quickack": false, 00:20:24.704 "enable_placement_id": 0, 00:20:24.704 "enable_zerocopy_send_server": true, 00:20:24.704 "enable_zerocopy_send_client": false, 00:20:24.704 "zerocopy_threshold": 0, 00:20:24.704 "tls_version": 0, 00:20:24.704 "enable_ktls": false 00:20:24.704 } 00:20:24.704 }, 00:20:24.704 { 00:20:24.704 "method": "sock_impl_set_options", 00:20:24.704 "params": { 00:20:24.704 "impl_name": "uring", 00:20:24.704 "recv_buf_size": 2097152, 00:20:24.704 "send_buf_size": 2097152, 00:20:24.704 "enable_recv_pipe": true, 00:20:24.704 "enable_quickack": false, 00:20:24.704 "enable_placement_id": 0, 00:20:24.704 "enable_zerocopy_send_server": false, 00:20:24.704 "enable_zerocopy_send_client": false, 00:20:24.704 "zerocopy_threshold": 0, 00:20:24.704 "tls_version": 0, 00:20:24.704 "enable_ktls": false 00:20:24.704 } 00:20:24.704 } 00:20:24.704 ] 00:20:24.704 }, 00:20:24.704 { 00:20:24.704 "subsystem": "vmd", 00:20:24.704 "config": [] 00:20:24.704 }, 00:20:24.705 { 00:20:24.705 "subsystem": "accel", 00:20:24.705 "config": [ 00:20:24.705 { 00:20:24.705 "method": "accel_set_options", 00:20:24.705 "params": { 00:20:24.705 "small_cache_size": 128, 00:20:24.705 "large_cache_size": 16, 00:20:24.705 "task_count": 2048, 00:20:24.705 "sequence_count": 2048, 00:20:24.705 "buf_count": 2048 00:20:24.705 } 00:20:24.705 } 00:20:24.705 ] 00:20:24.705 }, 00:20:24.705 { 00:20:24.705 "subsystem": "bdev", 00:20:24.705 "config": [ 00:20:24.705 { 00:20:24.705 "method": "bdev_set_options", 00:20:24.705 "params": { 00:20:24.705 "bdev_io_pool_size": 65535, 00:20:24.705 "bdev_io_cache_size": 256, 00:20:24.705 "bdev_auto_examine": true, 00:20:24.705 "iobuf_small_cache_size": 128, 00:20:24.705 "iobuf_large_cache_size": 16 
00:20:24.705 } 00:20:24.705 }, 00:20:24.705 { 00:20:24.705 "method": "bdev_raid_set_options", 00:20:24.705 "params": { 00:20:24.705 "process_window_size_kb": 1024, 00:20:24.705 "process_max_bandwidth_mb_sec": 0 00:20:24.705 } 00:20:24.705 }, 00:20:24.705 { 00:20:24.705 "method": "bdev_iscsi_set_options", 00:20:24.705 "params": { 00:20:24.705 "timeout_sec": 30 00:20:24.705 } 00:20:24.705 }, 00:20:24.705 { 00:20:24.705 "method": "bdev_nvme_set_options", 00:20:24.705 "params": { 00:20:24.705 "action_on_timeout": "none", 00:20:24.705 "timeout_us": 0, 00:20:24.705 "timeout_admin_us": 0, 00:20:24.705 "keep_alive_timeout_ms": 10000, 00:20:24.705 "arbitration_burst": 0, 00:20:24.705 "low_priority_weight": 0, 00:20:24.705 "medium_priority_weight": 0, 00:20:24.705 "high_priority_weight": 0, 00:20:24.705 "nvme_adminq_poll_period_us": 10000, 00:20:24.705 "nvme_ioq_poll_period_us": 0, 00:20:24.705 "io_queue_requests": 512, 00:20:24.705 "delay_cmd_submit": true, 00:20:24.705 "transport_retry_count": 4, 00:20:24.705 "bdev_retry_count": 3, 00:20:24.705 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.705 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.705 "transport_ack_timeout": 0, 00:20:24.705 "ctrlr_loss_timeout_sec": 0, 00:20:24.705 "reconnect_delay_sec": 0, 00:20:24.705 "fast_io_fail_timeout_sec": 0, 00:20:24.705 "disable_auto_failback": false, 00:20:24.705 "generate_uuids": false, 00:20:24.705 "transport_tos": 0, 00:20:24.705 "nvme_error_stat": false, 00:20:24.705 "rdma_srq_size": 0, 00:20:24.705 "io_path_stat": false, 00:20:24.705 "allow_accel_sequence": false, 00:20:24.705 "rdma_max_cq_size": 0, 00:20:24.705 "rdma_cm_event_timeout_ms": 0, 00:20:24.705 "dhchap_digests": [ 00:20:24.705 "sha256", 00:20:24.705 "sha384", 00:20:24.705 "sha512" 00:20:24.705 ], 00:20:24.705 "dhchap_dhgroups": [ 00:20:24.705 "null", 00:20:24.705 "ffdhe2048", 00:20:24.705 "ffdhe3072", 00:20:24.705 "ffdhe4096", 00:20:24.705 "ffdhe6144", 00:20:24.705 "ffdhe8192" 00:20:24.705 ] 00:20:24.705 } 00:20:24.705 }, 00:20:24.705 { 00:20:24.705 "method": "bdev_nvme_attach_controller", 00:20:24.705 "params": { 00:20:24.705 "name": "nvme0", 00:20:24.705 "trtype": "TCP", 00:20:24.705 "adrfam": "IPv4", 00:20:24.705 "traddr": "10.0.0.2", 00:20:24.705 "trsvcid": "4420", 00:20:24.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.705 "prchk_reftag": false, 00:20:24.705 "prchk_guard": false, 00:20:24.705 "ctrlr_loss_timeout_sec": 0, 00:20:24.705 "reconnect_delay_sec": 0, 00:20:24.705 "fast_io_fail_timeout_sec": 0, 00:20:24.705 "psk": "key0", 00:20:24.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.705 "hdgst": false, 00:20:24.705 "ddgst": false 00:20:24.705 } 00:20:24.705 }, 00:20:24.705 { 00:20:24.705 "method": "bdev_nvme_set_hotplug", 00:20:24.705 "params": { 00:20:24.705 "period_us": 100000, 00:20:24.705 "enable": false 00:20:24.705 } 00:20:24.705 }, 00:20:24.705 { 00:20:24.705 "method": "bdev_enable_histogram", 00:20:24.705 "params": { 00:20:24.705 "name": "nvme0n1", 00:20:24.705 "enable": true 00:20:24.705 } 00:20:24.705 }, 00:20:24.705 { 00:20:24.705 "method": "bdev_wait_for_examine" 00:20:24.705 } 00:20:24.705 ] 00:20:24.705 }, 00:20:24.705 { 00:20:24.705 "subsystem": "nbd", 00:20:24.705 "config": [] 00:20:24.705 } 00:20:24.705 ] 00:20:24.705 }' 00:20:25.001 [2024-07-22 18:27:36.697297] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:25.001 [2024-07-22 18:27:36.697702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78118 ] 00:20:25.001 [2024-07-22 18:27:36.878837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.260 [2024-07-22 18:27:37.144275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.519 [2024-07-22 18:27:37.436807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:25.777 [2024-07-22 18:27:37.564452] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.777 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.777 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:25.777 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:25.777 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:26.036 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.036 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:26.294 Running I/O for 1 seconds... 00:20:27.231 00:20:27.231 Latency(us) 00:20:27.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.231 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:27.231 Verification LBA range: start 0x0 length 0x2000 00:20:27.231 nvme0n1 : 1.03 2741.26 10.71 0.00 0.00 45975.93 6255.71 40751.48 00:20:27.231 =================================================================================================================== 00:20:27.231 Total : 2741.26 10.71 0.00 0.00 45975.93 6255.71 40751.48 00:20:27.231 0 00:20:27.231 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:27.231 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:27.231 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:27.231 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:27.231 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:27.231 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:27.231 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:27.231 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:27.231 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:27.231 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:27.231 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:27.231 nvmf_trace.0 00:20:27.490 18:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:27.490 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 78118 00:20:27.490 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78118 ']' 00:20:27.490 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78118 00:20:27.490 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:27.490 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.490 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78118 00:20:27.490 killing process with pid 78118 00:20:27.490 Received shutdown signal, test time was about 1.000000 seconds 00:20:27.490 00:20:27.490 Latency(us) 00:20:27.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.490 =================================================================================================================== 00:20:27.490 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.490 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:27.490 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:27.490 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78118' 00:20:27.490 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78118 00:20:27.490 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78118 00:20:28.426 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:28.426 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:28.426 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:28.426 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:28.426 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:28.426 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:28.426 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:28.426 rmmod nvme_tcp 00:20:28.687 rmmod nvme_fabrics 00:20:28.687 rmmod nvme_keyring 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 78086 ']' 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 78086 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 78086 ']' 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 78086 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.687 18:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78086 00:20:28.687 killing process with pid 78086 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78086' 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 78086 00:20:28.687 18:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 78086 00:20:30.063 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:30.063 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:30.063 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:30.063 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:30.063 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:30.063 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.063 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.063 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yoLjZdNKgL /tmp/tmp.5hW9wDMdT6 /tmp/tmp.jij0jggWrV 00:20:30.064 00:20:30.064 real 1m48.454s 00:20:30.064 user 2m53.433s 00:20:30.064 sys 0m27.791s 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.064 ************************************ 00:20:30.064 END TEST nvmf_tls 00:20:30.064 ************************************ 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:30.064 ************************************ 00:20:30.064 START TEST nvmf_fips 00:20:30.064 ************************************ 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:30.064 * Looking for test storage... 
00:20:30.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:30.064 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 
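The xtrace that follows steps through the version comparison in scripts/common.sh (ge, which calls cmp_versions), splitting "3.0.9" and "3.0.0" on dots and comparing field by field to decide whether the installed OpenSSL is new enough for the FIPS checks. A minimal standalone sketch of that comparison (numeric fields only; not the SPDK helper itself):

    # Return 0 when version $1 >= $2, comparing dot/dash-separated numeric fields
    version_ge() {
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
        done
        return 0    # all fields equal
    }

    version_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL >= 3.0.0"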
00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:30.064 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:30.065 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:30.324 Error setting digest 00:20:30.324 00724FA1557F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:30.324 00724FA1557F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:30.324 Cannot find device "nvmf_tgt_br" 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:30.324 Cannot find device "nvmf_tgt_br2" 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:30.324 Cannot find device "nvmf_tgt_br" 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:30.324 Cannot find device "nvmf_tgt_br2" 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:30.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:30.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:20:30.324 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:30.583 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:30.583 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip 
addr add 10.0.0.1/24 dev nvmf_init_if 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:30.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:20:30.584 00:20:30.584 --- 10.0.0.2 ping statistics --- 00:20:30.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.584 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:30.584 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:30.584 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:20:30.584 00:20:30.584 --- 10.0.0.3 ping statistics --- 00:20:30.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.584 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:30.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:30.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:20:30.584 00:20:30.584 --- 10.0.0.1 ping statistics --- 00:20:30.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.584 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=78406 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 78406 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 78406 ']' 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.584 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:30.842 [2024-07-22 18:27:42.739818] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:30.842 [2024-07-22 18:27:42.739981] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.100 [2024-07-22 18:27:42.925624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.359 [2024-07-22 18:27:43.225076] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.359 [2024-07-22 18:27:43.225170] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.359 [2024-07-22 18:27:43.225197] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.359 [2024-07-22 18:27:43.225267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.359 [2024-07-22 18:27:43.225283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.359 [2024-07-22 18:27:43.225354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.616 [2024-07-22 18:27:43.446778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:31.616 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.616 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:31.616 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:31.616 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:31.616 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:31.874 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.874 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:31.874 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:31.874 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:31.874 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:31.874 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:31.874 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:31.874 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:31.874 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:32.134 [2024-07-22 18:27:43.904235] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.134 [2024-07-22 18:27:43.920245] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:32.134 [2024-07-22 18:27:43.920543] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.134 [2024-07-22 18:27:43.987430] 
tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:32.134 malloc0 00:20:32.134 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:32.134 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=78444 00:20:32.134 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.134 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 78444 /var/tmp/bdevperf.sock 00:20:32.134 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 78444 ']' 00:20:32.134 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.134 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.134 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.134 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.134 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:32.411 [2024-07-22 18:27:44.182008] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:32.412 [2024-07-22 18:27:44.182228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78444 ] 00:20:32.412 [2024-07-22 18:27:44.360251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.719 [2024-07-22 18:27:44.672870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.977 [2024-07-22 18:27:44.883485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:33.236 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.236 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:33.236 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:33.494 [2024-07-22 18:27:45.278287] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.494 [2024-07-22 18:27:45.278485] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:33.494 TLSTESTn1 00:20:33.494 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:33.753 Running I/O for 10 seconds... 
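Condensed for reference, the client side of the TLS run above reduces to three commands; everything here is taken from the invocations already recorded in this log (the $spdk variable is only shorthand for the repository root used throughout), so this is a restatement of the flow, not an extra step performed by the test:

  spdk=/home/vagrant/spdk_repo/spdk                    # repo root, as used throughout this log
  # Start bdevperf idle (-z) on its own RPC socket; -t 10 gives the 10-second verify run shown below.
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # Attach a controller over TLS, pointing at the PSK file written earlier (key.txt, mode 0600).
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk $spdk/test/nvmf/fips/key.txt
  # Kick off the queued verify workload against the attached namespace (TLSTESTn1).
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The path-based PSK used here is the source of the deprecation warnings in this run (nvmf_tcp_psk_path and spdk_nvme_ctrlr_opts.psk, both flagged for removal in v24.09).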
00:20:43.742 00:20:43.742 Latency(us) 00:20:43.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.742 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:43.742 Verification LBA range: start 0x0 length 0x2000 00:20:43.742 TLSTESTn1 : 10.02 2681.74 10.48 0.00 0.00 47642.26 11379.43 37415.10 00:20:43.742 =================================================================================================================== 00:20:43.742 Total : 2681.74 10.48 0.00 0.00 47642.26 11379.43 37415.10 00:20:43.742 0 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:43.742 nvmf_trace.0 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 78444 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 78444 ']' 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 78444 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78444 00:20:43.742 killing process with pid 78444 00:20:43.742 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.742 00:20:43.742 Latency(us) 00:20:43.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.742 =================================================================================================================== 00:20:43.742 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78444' 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 78444 00:20:43.742 [2024-07-22 18:27:55.707564] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:43.742 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 78444 00:20:45.116 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:45.116 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:45.116 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:45.116 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:45.116 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:45.116 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:45.116 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:45.116 rmmod nvme_tcp 00:20:45.116 rmmod nvme_fabrics 00:20:45.116 rmmod nvme_keyring 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 78406 ']' 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 78406 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 78406 ']' 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 78406 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78406 00:20:45.116 killing process with pid 78406 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78406' 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 78406 00:20:45.116 [2024-07-22 18:27:57.040012] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:45.116 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 78406 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:46.493 ************************************ 00:20:46.493 END TEST nvmf_fips 00:20:46.493 ************************************ 00:20:46.493 00:20:46.493 real 0m16.522s 00:20:46.493 user 0m23.621s 00:20:46.493 sys 0m5.410s 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:46.493 ************************************ 00:20:46.493 START TEST nvmf_fuzz 00:20:46.493 ************************************ 00:20:46.493 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:46.752 * Looking for test storage... 
00:20:46.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.752 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
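The nvmftestinit above ends in nvmf_veth_init, whose individual commands follow in this log (and already appeared once for the FIPS test). Condensed into a single sketch of the topology they build, with the addresses and interface names exactly as recorded here:

  # Two veth pairs into a dedicated target namespace, one pair for the initiator,
  # all *_br peers enslaved to a single bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target interface
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # admit the NVMe/TCP port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # (The log additionally brings each link up, enables lo inside the namespace,
  #  and verifies reachability with one ping per address.)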
00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:46.753 Cannot find device "nvmf_tgt_br" 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.753 Cannot find device "nvmf_tgt_br2" 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:46.753 Cannot find device "nvmf_tgt_br" 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # true 
00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:46.753 Cannot find device "nvmf_tgt_br2" 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:46.753 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:47.012 18:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:47.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:20:47.012 00:20:47.012 --- 10.0.0.2 ping statistics --- 00:20:47.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.012 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:47.012 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:47.012 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:20:47.012 00:20:47.012 --- 10.0.0.3 ping statistics --- 00:20:47.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.012 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:47.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:47.012 00:20:47.012 --- 10.0.0.1 ping statistics --- 00:20:47.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.012 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.012 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78797 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # 
waitforlisten 78797 00:20:47.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 78797 ']' 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.013 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.389 Malloc0 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
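The rpc_cmd calls above are the whole fuzz-target setup: a TCP transport, one malloc bdev, a subsystem that allows any host, its namespace, and a listener on 10.0.0.2:4420. Since rpc_cmd is the test framework's wrapper over scripts/rpc.py (the same script the FIPS test invoked directly), the equivalent direct sequence is sketched below; flags are copied from the log, and the $rpc variable is only shorthand:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as in the log
  $rpc bdev_malloc_create -b Malloc0 64 512                         # 64 MB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allow any host, -s serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The nvme_fuzz runs that follow then target exactly this listener through the trid string trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420.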
00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:20:48.389 18:28:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:20:49.324 Shutting down the fuzz application 00:20:49.324 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:50.290 Shutting down the fuzz application 00:20:50.290 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:50.290 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.290 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:50.290 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.290 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:50.290 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:50.290 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:50.290 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:50.290 rmmod nvme_tcp 00:20:50.290 rmmod nvme_fabrics 00:20:50.290 rmmod nvme_keyring 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 78797 ']' 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 78797 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 78797 ']' 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 78797 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78797 00:20:50.290 killing process with pid 78797 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:50.290 18:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78797' 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 78797 00:20:50.290 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 78797 00:20:51.664 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:51.664 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:51.664 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:51.664 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.664 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.664 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.664 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.664 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.665 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:51.665 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:20:51.665 00:20:51.665 real 0m5.074s 00:20:51.665 user 0m6.109s 00:20:51.665 sys 0m0.905s 00:20:51.665 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:51.665 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:51.665 ************************************ 00:20:51.665 END TEST nvmf_fuzz 00:20:51.665 ************************************ 00:20:51.665 18:28:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:51.665 18:28:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:51.665 18:28:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:51.665 18:28:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:51.665 18:28:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:51.665 ************************************ 00:20:51.665 START TEST nvmf_multiconnection 00:20:51.665 ************************************ 00:20:51.665 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:51.665 * Looking for test storage... 
00:20:51.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:51.665 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.924 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.925 18:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:51.925 Cannot find device "nvmf_tgt_br" 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:51.925 Cannot find device "nvmf_tgt_br2" 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:51.925 Cannot find device "nvmf_tgt_br" 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:51.925 Cannot find device "nvmf_tgt_br2" 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:51.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:51.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:51.925 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:52.184 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:52.184 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:52.184 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:52.184 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:52.184 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:52.184 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:52.184 18:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:52.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:20:52.184 00:20:52.184 --- 10.0.0.2 ping statistics --- 00:20:52.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.184 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:52.184 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:52.184 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:20:52.184 00:20:52.184 --- 10.0.0.3 ping statistics --- 00:20:52.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.184 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:52.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:52.184 00:20:52.184 --- 10.0.0.1 ping statistics --- 00:20:52.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.184 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=79035 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 79035 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 79035 ']' 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
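At this point the test network is in place: the host-side initiator holds 10.0.0.1/24 on nvmf_init_if, the namespace nvmf_tgt_ns_spdk holds nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), everything hangs off the nvmf_br bridge, and iptables accepts TCP port 4420, so nvmf_tgt is launched inside the namespace (the "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF" command above). The trace that follows repeats one pattern eleven times (NVMF_SUBSYS=11): create a 64 MB malloc bdev with 512-byte blocks, wrap it in a subsystem, add a TCP listener on 10.0.0.2:4420, connect from the host, and wait until the matching serial shows up in lsblk. A condensed sketch of a single iteration, assuming rpc_cmd is roughly equivalent to calling scripts/rpc.py against the target's default socket; the $rpc variable and the simplified wait loop are illustrative, not the script's exact code:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # assumed stand-in for rpc_cmd
i=1                                                 # the trace runs i=1..11

$rpc nvmf_create_transport -t tcp -o -u 8192        # done once, before the loop

# target side: bdev -> subsystem -> namespace -> listener
$rpc bdev_malloc_create 64 512 -b Malloc$i
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

# initiator side: connect over TCP and wait for the block device to appear
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 \
    --hostid=1e224894-a0fc-4112-b81b-a37606f50c96
while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -lt 1 ]; do
    sleep 2    # poll every 2 s (the harness's waitforserial caps this at 15 tries)
done

Once all eleven connections report their serials, the script hands /dev/nvme0n1 through /dev/nvme10n1 to the fio wrapper for a 10-second 256 KiB sequential-read run at queue depth 64, which is the fio output that closes this part of the log.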
00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:52.184 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:52.184 [2024-07-22 18:28:04.197851] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:20:52.184 [2024-07-22 18:28:04.198027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.443 [2024-07-22 18:28:04.377360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.701 [2024-07-22 18:28:04.634408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.701 [2024-07-22 18:28:04.634492] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.701 [2024-07-22 18:28:04.634510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.701 [2024-07-22 18:28:04.634525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.701 [2024-07-22 18:28:04.634541] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.701 [2024-07-22 18:28:04.634804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.701 [2024-07-22 18:28:04.635080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.701 [2024-07-22 18:28:04.635810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.701 [2024-07-22 18:28:04.635842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.959 [2024-07-22 18:28:04.842383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.218 [2024-07-22 18:28:05.174140] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:20:53.218 18:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.218 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.476 Malloc1 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.476 [2024-07-22 18:28:05.292630] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.476 Malloc2 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.476 Malloc3 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.476 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.735 Malloc4 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.735 Malloc5 00:20:53.735 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:53.736 
18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.736 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.995 Malloc6 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.995 Malloc7 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.995 Malloc8 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.995 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:53.995 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.995 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:53.995 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.995 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.253 
18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.254 Malloc9 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.254 Malloc10 00:20:54.254 18:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.254 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.512 Malloc11 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:54.512 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:54.513 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:54.513 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:56.482 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:56.482 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:56.482 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:20:56.483 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:56.483 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:56.483 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:56.483 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:56.483 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:20:56.741 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:56.741 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:56.741 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:56.741 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:56.741 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:20:58.642 18:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:58.642 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:58.642 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:20:58.642 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:58.642 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:58.642 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:20:58.642 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:58.642 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:20:58.899 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:58.899 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:20:58.899 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:58.899 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:58.899 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:00.799 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:00.799 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:00.799 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:21:00.799 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:00.799 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:00.800 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:00.800 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:00.800 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:01.058 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:01.058 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:01.058 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:01.058 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:21:01.058 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:02.959 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:02.959 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:02.959 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:21:02.959 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:02.959 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:02.959 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:02.959 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:02.959 18:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:21:03.218 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:03.218 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:03.218 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:03.218 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:03.218 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:05.120 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:05.120 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:05.120 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:21:05.120 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:05.120 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:05.120 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:05.120 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:05.120 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:21:05.378 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:05.378 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:05.378 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:21:05.379 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:05.379 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:07.280 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:07.280 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:07.280 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:21:07.280 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:07.280 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:07.280 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:07.280 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:07.280 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:21:07.538 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:07.538 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:07.538 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:07.538 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:07.538 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:09.438 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:09.438 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:09.438 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:21:09.438 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:09.438 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:09.438 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:09.438 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:09.438 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:21:09.697 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:09.697 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:21:09.697 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:09.697 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:09.697 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:11.598 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:11.598 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:11.598 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:21:11.598 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:11.598 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:11.598 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:11.598 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:11.598 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:21:11.856 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:11.856 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:11.856 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:11.856 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:11.856 18:28:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:13.754 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:13.754 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:21:13.754 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:13.754 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:13.754 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:13.754 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:13.754 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:13.754 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:21:14.012 18:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:14.012 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:14.012 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:14.012 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:14.012 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:15.912 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:15.912 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:15.912 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:21:16.172 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:16.172 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:16.172 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:16.172 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:16.172 18:28:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:21:16.172 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:16.172 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:21:16.172 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:16.172 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:16.172 18:28:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:21:18.704 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:18.704 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:18.704 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:21:18.704 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:18.704 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:18.704 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:21:18.704 18:28:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:18.704 [global] 00:21:18.704 thread=1 00:21:18.704 invalidate=1 00:21:18.704 rw=read 00:21:18.704 time_based=1 
00:21:18.704 runtime=10 00:21:18.704 ioengine=libaio 00:21:18.704 direct=1 00:21:18.704 bs=262144 00:21:18.704 iodepth=64 00:21:18.704 norandommap=1 00:21:18.704 numjobs=1 00:21:18.704 00:21:18.704 [job0] 00:21:18.704 filename=/dev/nvme0n1 00:21:18.704 [job1] 00:21:18.704 filename=/dev/nvme10n1 00:21:18.704 [job2] 00:21:18.704 filename=/dev/nvme1n1 00:21:18.704 [job3] 00:21:18.704 filename=/dev/nvme2n1 00:21:18.704 [job4] 00:21:18.704 filename=/dev/nvme3n1 00:21:18.704 [job5] 00:21:18.704 filename=/dev/nvme4n1 00:21:18.704 [job6] 00:21:18.704 filename=/dev/nvme5n1 00:21:18.704 [job7] 00:21:18.704 filename=/dev/nvme6n1 00:21:18.704 [job8] 00:21:18.704 filename=/dev/nvme7n1 00:21:18.704 [job9] 00:21:18.704 filename=/dev/nvme8n1 00:21:18.704 [job10] 00:21:18.704 filename=/dev/nvme9n1 00:21:18.704 Could not set queue depth (nvme0n1) 00:21:18.704 Could not set queue depth (nvme10n1) 00:21:18.704 Could not set queue depth (nvme1n1) 00:21:18.704 Could not set queue depth (nvme2n1) 00:21:18.704 Could not set queue depth (nvme3n1) 00:21:18.704 Could not set queue depth (nvme4n1) 00:21:18.704 Could not set queue depth (nvme5n1) 00:21:18.704 Could not set queue depth (nvme6n1) 00:21:18.704 Could not set queue depth (nvme7n1) 00:21:18.704 Could not set queue depth (nvme8n1) 00:21:18.704 Could not set queue depth (nvme9n1) 00:21:18.704 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.704 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.704 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.704 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.704 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.704 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.704 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.704 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.704 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.704 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.704 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:18.704 fio-3.35 00:21:18.704 Starting 11 threads 00:21:30.937 00:21:30.937 job0: (groupid=0, jobs=1): err= 0: pid=79490: Mon Jul 22 18:28:40 2024 00:21:30.937 read: IOPS=613, BW=153MiB/s (161MB/s)(1536MiB/10018msec) 00:21:30.937 slat (usec): min=18, max=113332, avg=1592.96, stdev=3972.85 00:21:30.937 clat (msec): min=8, max=220, avg=102.65, stdev=36.08 00:21:30.937 lat (msec): min=10, max=220, avg=104.24, stdev=36.63 00:21:30.937 clat percentiles (msec): 00:21:30.937 | 1.00th=[ 38], 5.00th=[ 63], 10.00th=[ 66], 20.00th=[ 69], 00:21:30.937 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 132], 00:21:30.937 | 70.00th=[ 136], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 146], 00:21:30.937 | 99.00th=[ 163], 99.50th=[ 201], 99.90th=[ 220], 99.95th=[ 222], 00:21:30.937 | 99.99th=[ 222] 00:21:30.937 bw ( KiB/s): min=92857, max=246272, per=8.63%, 
avg=155659.45, stdev=56656.81, samples=20 00:21:30.937 iops : min= 362, max= 962, avg=607.95, stdev=221.38, samples=20 00:21:30.937 lat (msec) : 10=0.02%, 20=0.36%, 50=1.11%, 100=50.39%, 250=48.13% 00:21:30.937 cpu : usr=0.33%, sys=2.34%, ctx=1435, majf=0, minf=4097 00:21:30.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:30.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.937 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.937 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.937 job1: (groupid=0, jobs=1): err= 0: pid=79491: Mon Jul 22 18:28:40 2024 00:21:30.937 read: IOPS=493, BW=123MiB/s (129MB/s)(1249MiB/10119msec) 00:21:30.937 slat (usec): min=18, max=64583, avg=1997.98, stdev=4603.22 00:21:30.937 clat (msec): min=49, max=273, avg=127.46, stdev=28.04 00:21:30.937 lat (msec): min=49, max=273, avg=129.46, stdev=28.52 00:21:30.937 clat percentiles (msec): 00:21:30.937 | 1.00th=[ 64], 5.00th=[ 69], 10.00th=[ 74], 20.00th=[ 111], 00:21:30.937 | 30.00th=[ 132], 40.00th=[ 136], 50.00th=[ 136], 60.00th=[ 138], 00:21:30.937 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 150], 95.00th=[ 155], 00:21:30.937 | 99.00th=[ 178], 99.50th=[ 203], 99.90th=[ 262], 99.95th=[ 262], 00:21:30.937 | 99.99th=[ 275] 00:21:30.937 bw ( KiB/s): min=100864, max=222720, per=7.00%, avg=126227.60, stdev=29502.19, samples=20 00:21:30.937 iops : min= 394, max= 870, avg=492.95, stdev=115.22, samples=20 00:21:30.938 lat (msec) : 50=0.04%, 100=18.32%, 250=81.52%, 500=0.12% 00:21:30.938 cpu : usr=0.28%, sys=2.03%, ctx=1188, majf=0, minf=4097 00:21:30.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:30.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.938 issued rwts: total=4995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.938 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.938 job2: (groupid=0, jobs=1): err= 0: pid=79492: Mon Jul 22 18:28:40 2024 00:21:30.938 read: IOPS=887, BW=222MiB/s (233MB/s)(2240MiB/10095msec) 00:21:30.938 slat (usec): min=17, max=31574, avg=1085.29, stdev=2532.58 00:21:30.938 clat (msec): min=11, max=193, avg=70.93, stdev=24.54 00:21:30.938 lat (msec): min=12, max=204, avg=72.02, stdev=24.88 00:21:30.938 clat percentiles (msec): 00:21:30.938 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 41], 00:21:30.938 | 30.00th=[ 66], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 73], 00:21:30.938 | 70.00th=[ 77], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 111], 00:21:30.938 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 186], 99.95th=[ 194], 00:21:30.938 | 99.99th=[ 194] 00:21:30.938 bw ( KiB/s): min=144896, max=428032, per=12.62%, avg=227676.00, stdev=81706.43, samples=20 00:21:30.938 iops : min= 566, max= 1672, avg=889.20, stdev=319.01, samples=20 00:21:30.938 lat (msec) : 20=0.09%, 50=25.22%, 100=56.28%, 250=18.40% 00:21:30.938 cpu : usr=0.48%, sys=3.37%, ctx=1977, majf=0, minf=4097 00:21:30.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:30.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.938 issued rwts: total=8960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.938 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:21:30.938 job3: (groupid=0, jobs=1): err= 0: pid=79493: Mon Jul 22 18:28:40 2024 00:21:30.938 read: IOPS=754, BW=189MiB/s (198MB/s)(1903MiB/10092msec) 00:21:30.938 slat (usec): min=16, max=27668, avg=1288.22, stdev=2921.58 00:21:30.938 clat (msec): min=3, max=199, avg=83.41, stdev=19.34 00:21:30.938 lat (msec): min=4, max=199, avg=84.70, stdev=19.57 00:21:30.938 clat percentiles (msec): 00:21:30.938 | 1.00th=[ 45], 5.00th=[ 64], 10.00th=[ 67], 20.00th=[ 69], 00:21:30.938 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 75], 60.00th=[ 80], 00:21:30.938 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 114], 00:21:30.938 | 99.00th=[ 126], 99.50th=[ 136], 99.90th=[ 194], 99.95th=[ 201], 00:21:30.938 | 99.99th=[ 201] 00:21:30.938 bw ( KiB/s): min=143872, max=230400, per=10.71%, avg=193213.40, stdev=34770.08, samples=20 00:21:30.938 iops : min= 562, max= 900, avg=754.65, stdev=135.81, samples=20 00:21:30.938 lat (msec) : 4=0.03%, 10=0.04%, 20=0.25%, 50=0.76%, 100=70.87% 00:21:30.938 lat (msec) : 250=28.06% 00:21:30.938 cpu : usr=0.41%, sys=2.28%, ctx=1800, majf=0, minf=4097 00:21:30.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:30.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.938 issued rwts: total=7613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.938 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.938 job4: (groupid=0, jobs=1): err= 0: pid=79494: Mon Jul 22 18:28:40 2024 00:21:30.938 read: IOPS=561, BW=140MiB/s (147MB/s)(1411MiB/10046msec) 00:21:30.938 slat (usec): min=17, max=93840, avg=1767.73, stdev=4203.29 00:21:30.938 clat (msec): min=40, max=181, avg=112.02, stdev=29.81 00:21:30.938 lat (msec): min=45, max=231, avg=113.79, stdev=30.34 00:21:30.938 clat percentiles (msec): 00:21:30.938 | 1.00th=[ 63], 5.00th=[ 67], 10.00th=[ 70], 20.00th=[ 75], 00:21:30.938 | 30.00th=[ 86], 40.00th=[ 106], 50.00th=[ 128], 60.00th=[ 133], 00:21:30.938 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 142], 95.00th=[ 146], 00:21:30.938 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 180], 99.95th=[ 180], 00:21:30.938 | 99.99th=[ 182] 00:21:30.938 bw ( KiB/s): min=114688, max=225852, per=7.92%, avg=142825.35, stdev=40953.03, samples=20 00:21:30.938 iops : min= 448, max= 882, avg=557.85, stdev=159.98, samples=20 00:21:30.938 lat (msec) : 50=0.27%, 100=36.56%, 250=63.18% 00:21:30.938 cpu : usr=0.38%, sys=2.25%, ctx=1321, majf=0, minf=4097 00:21:30.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:30.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.938 issued rwts: total=5643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.938 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.938 job5: (groupid=0, jobs=1): err= 0: pid=79495: Mon Jul 22 18:28:40 2024 00:21:30.938 read: IOPS=493, BW=123MiB/s (129MB/s)(1250MiB/10120msec) 00:21:30.938 slat (usec): min=17, max=55805, avg=2002.13, stdev=4624.00 00:21:30.938 clat (msec): min=42, max=256, avg=127.36, stdev=28.08 00:21:30.938 lat (msec): min=43, max=256, avg=129.37, stdev=28.54 00:21:30.938 clat percentiles (msec): 00:21:30.938 | 1.00th=[ 63], 5.00th=[ 69], 10.00th=[ 75], 20.00th=[ 109], 00:21:30.938 | 30.00th=[ 131], 40.00th=[ 134], 50.00th=[ 136], 60.00th=[ 138], 00:21:30.938 | 70.00th=[ 140], 
80.00th=[ 144], 90.00th=[ 148], 95.00th=[ 155], 00:21:30.938 | 99.00th=[ 178], 99.50th=[ 207], 99.90th=[ 255], 99.95th=[ 257], 00:21:30.938 | 99.99th=[ 257] 00:21:30.938 bw ( KiB/s): min=99840, max=225280, per=7.00%, avg=126281.30, stdev=30383.61, samples=20 00:21:30.938 iops : min= 390, max= 880, avg=493.20, stdev=118.65, samples=20 00:21:30.938 lat (msec) : 50=0.24%, 100=18.52%, 250=81.12%, 500=0.12% 00:21:30.938 cpu : usr=0.29%, sys=1.52%, ctx=1283, majf=0, minf=4097 00:21:30.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:30.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.938 issued rwts: total=4999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.938 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.938 job6: (groupid=0, jobs=1): err= 0: pid=79496: Mon Jul 22 18:28:40 2024 00:21:30.938 read: IOPS=558, BW=140MiB/s (147MB/s)(1404MiB/10047msec) 00:21:30.938 slat (usec): min=21, max=75379, avg=1776.81, stdev=4224.53 00:21:30.938 clat (msec): min=40, max=193, avg=112.58, stdev=30.18 00:21:30.938 lat (msec): min=45, max=213, avg=114.36, stdev=30.66 00:21:30.938 clat percentiles (msec): 00:21:30.938 | 1.00th=[ 62], 5.00th=[ 67], 10.00th=[ 70], 20.00th=[ 75], 00:21:30.938 | 30.00th=[ 87], 40.00th=[ 105], 50.00th=[ 129], 60.00th=[ 134], 00:21:30.938 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 148], 00:21:30.938 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 190], 99.95th=[ 194], 00:21:30.938 | 99.99th=[ 194] 00:21:30.938 bw ( KiB/s): min=102605, max=225280, per=7.88%, avg=142078.80, stdev=41621.53, samples=20 00:21:30.938 iops : min= 400, max= 880, avg=554.90, stdev=162.62, samples=20 00:21:30.938 lat (msec) : 50=0.32%, 100=36.85%, 250=62.83% 00:21:30.938 cpu : usr=0.30%, sys=2.30%, ctx=1336, majf=0, minf=4097 00:21:30.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:30.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.938 issued rwts: total=5615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.938 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.938 job7: (groupid=0, jobs=1): err= 0: pid=79497: Mon Jul 22 18:28:40 2024 00:21:30.938 read: IOPS=480, BW=120MiB/s (126MB/s)(1216MiB/10120msec) 00:21:30.938 slat (usec): min=17, max=40525, avg=2021.76, stdev=4506.92 00:21:30.938 clat (msec): min=30, max=261, avg=130.93, stdev=21.93 00:21:30.938 lat (msec): min=30, max=261, avg=132.95, stdev=22.33 00:21:30.938 clat percentiles (msec): 00:21:30.938 | 1.00th=[ 54], 5.00th=[ 88], 10.00th=[ 100], 20.00th=[ 121], 00:21:30.938 | 30.00th=[ 132], 40.00th=[ 134], 50.00th=[ 136], 60.00th=[ 138], 00:21:30.938 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 148], 95.00th=[ 155], 00:21:30.938 | 99.00th=[ 174], 99.50th=[ 205], 99.90th=[ 247], 99.95th=[ 247], 00:21:30.938 | 99.99th=[ 262] 00:21:30.938 bw ( KiB/s): min=100864, max=166220, per=6.81%, avg=122887.05, stdev=16985.88, samples=20 00:21:30.938 iops : min= 394, max= 649, avg=480.00, stdev=66.32, samples=20 00:21:30.938 lat (msec) : 50=0.88%, 100=9.52%, 250=89.58%, 500=0.02% 00:21:30.938 cpu : usr=0.23%, sys=1.90%, ctx=1185, majf=0, minf=4097 00:21:30.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:30.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:21:30.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.938 issued rwts: total=4865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.938 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.938 job8: (groupid=0, jobs=1): err= 0: pid=79498: Mon Jul 22 18:28:40 2024 00:21:30.938 read: IOPS=473, BW=118MiB/s (124MB/s)(1199MiB/10120msec) 00:21:30.938 slat (usec): min=15, max=60927, avg=2060.37, stdev=4769.75 00:21:30.938 clat (msec): min=45, max=265, avg=132.85, stdev=19.22 00:21:30.938 lat (msec): min=45, max=265, avg=134.91, stdev=19.70 00:21:30.938 clat percentiles (msec): 00:21:30.938 | 1.00th=[ 74], 5.00th=[ 97], 10.00th=[ 105], 20.00th=[ 127], 00:21:30.938 | 30.00th=[ 132], 40.00th=[ 134], 50.00th=[ 136], 60.00th=[ 138], 00:21:30.938 | 70.00th=[ 140], 80.00th=[ 144], 90.00th=[ 150], 95.00th=[ 155], 00:21:30.938 | 99.00th=[ 176], 99.50th=[ 211], 99.90th=[ 245], 99.95th=[ 245], 00:21:30.938 | 99.99th=[ 266] 00:21:30.938 bw ( KiB/s): min=102912, max=154112, per=6.71%, avg=121040.30, stdev=13860.56, samples=20 00:21:30.938 iops : min= 402, max= 602, avg=472.80, stdev=54.15, samples=20 00:21:30.938 lat (msec) : 50=0.31%, 100=6.59%, 250=93.07%, 500=0.02% 00:21:30.938 cpu : usr=0.29%, sys=1.40%, ctx=1223, majf=0, minf=4097 00:21:30.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:30.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.938 issued rwts: total=4794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.938 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.938 job9: (groupid=0, jobs=1): err= 0: pid=79499: Mon Jul 22 18:28:40 2024 00:21:30.938 read: IOPS=511, BW=128MiB/s (134MB/s)(1291MiB/10095msec) 00:21:30.939 slat (usec): min=15, max=84864, avg=1910.96, stdev=4377.61 00:21:30.939 clat (msec): min=9, max=202, avg=123.01, stdev=21.66 00:21:30.939 lat (msec): min=9, max=204, avg=124.92, stdev=22.08 00:21:30.939 clat percentiles (msec): 00:21:30.939 | 1.00th=[ 51], 5.00th=[ 86], 10.00th=[ 101], 20.00th=[ 105], 00:21:30.939 | 30.00th=[ 109], 40.00th=[ 117], 50.00th=[ 132], 60.00th=[ 136], 00:21:30.939 | 70.00th=[ 138], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 148], 00:21:30.939 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 199], 99.95th=[ 203], 00:21:30.939 | 99.99th=[ 203] 00:21:30.939 bw ( KiB/s): min=111104, max=179864, per=7.24%, avg=130521.20, stdev=20799.65, samples=20 00:21:30.939 iops : min= 434, max= 702, avg=509.80, stdev=81.19, samples=20 00:21:30.939 lat (msec) : 10=0.02%, 20=0.23%, 50=0.76%, 100=9.18%, 250=89.81% 00:21:30.939 cpu : usr=0.21%, sys=2.01%, ctx=1270, majf=0, minf=4097 00:21:30.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:30.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.939 issued rwts: total=5164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.939 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.939 job10: (groupid=0, jobs=1): err= 0: pid=79500: Mon Jul 22 18:28:40 2024 00:21:30.939 read: IOPS=1248, BW=312MiB/s (327MB/s)(3129MiB/10024msec) 00:21:30.939 slat (usec): min=16, max=36024, avg=787.74, stdev=1909.24 00:21:30.939 clat (msec): min=18, max=146, avg=50.39, stdev=21.25 00:21:30.939 lat (msec): min=23, max=147, avg=51.18, stdev=21.53 00:21:30.939 clat percentiles (msec): 
00:21:30.939 | 1.00th=[ 33], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 37], 00:21:30.939 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 40], 00:21:30.939 | 70.00th=[ 63], 80.00th=[ 70], 90.00th=[ 78], 95.00th=[ 89], 00:21:30.939 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 142], 99.95th=[ 146], 00:21:30.939 | 99.99th=[ 146] 00:21:30.939 bw ( KiB/s): min=145920, max=437760, per=17.66%, avg=318577.95, stdev=119875.85, samples=20 00:21:30.939 iops : min= 570, max= 1710, avg=1244.30, stdev=468.24, samples=20 00:21:30.939 lat (msec) : 20=0.01%, 50=67.59%, 100=28.50%, 250=3.91% 00:21:30.939 cpu : usr=0.47%, sys=3.46%, ctx=2720, majf=0, minf=4097 00:21:30.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:21:30.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.939 issued rwts: total=12514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.939 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.939 00:21:30.939 Run status group 0 (all jobs): 00:21:30.939 READ: bw=1762MiB/s (1847MB/s), 118MiB/s-312MiB/s (124MB/s-327MB/s), io=17.4GiB (18.7GB), run=10018-10120msec 00:21:30.939 00:21:30.939 Disk stats (read/write): 00:21:30.939 nvme0n1: ios=12188/0, merge=0/0, ticks=1233462/0, in_queue=1233462, util=97.73% 00:21:30.939 nvme10n1: ios=9877/0, merge=0/0, ticks=1226785/0, in_queue=1226785, util=97.94% 00:21:30.939 nvme1n1: ios=17802/0, merge=0/0, ticks=1233929/0, in_queue=1233929, util=98.14% 00:21:30.939 nvme2n1: ios=15120/0, merge=0/0, ticks=1232271/0, in_queue=1232271, util=98.21% 00:21:30.939 nvme3n1: ios=11165/0, merge=0/0, ticks=1232660/0, in_queue=1232660, util=98.22% 00:21:30.939 nvme4n1: ios=9877/0, merge=0/0, ticks=1225796/0, in_queue=1225796, util=98.48% 00:21:30.939 nvme5n1: ios=11131/0, merge=0/0, ticks=1233521/0, in_queue=1233521, util=98.55% 00:21:30.939 nvme6n1: ios=9607/0, merge=0/0, ticks=1226418/0, in_queue=1226418, util=98.61% 00:21:30.939 nvme7n1: ios=9466/0, merge=0/0, ticks=1227552/0, in_queue=1227552, util=98.93% 00:21:30.939 nvme8n1: ios=10217/0, merge=0/0, ticks=1229018/0, in_queue=1229018, util=99.06% 00:21:30.939 nvme9n1: ios=24938/0, merge=0/0, ticks=1239869/0, in_queue=1239869, util=99.14% 00:21:30.939 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:30.939 [global] 00:21:30.939 thread=1 00:21:30.939 invalidate=1 00:21:30.939 rw=randwrite 00:21:30.939 time_based=1 00:21:30.939 runtime=10 00:21:30.939 ioengine=libaio 00:21:30.939 direct=1 00:21:30.939 bs=262144 00:21:30.939 iodepth=64 00:21:30.939 norandommap=1 00:21:30.939 numjobs=1 00:21:30.939 00:21:30.939 [job0] 00:21:30.939 filename=/dev/nvme0n1 00:21:30.939 [job1] 00:21:30.939 filename=/dev/nvme10n1 00:21:30.939 [job2] 00:21:30.939 filename=/dev/nvme1n1 00:21:30.939 [job3] 00:21:30.939 filename=/dev/nvme2n1 00:21:30.939 [job4] 00:21:30.939 filename=/dev/nvme3n1 00:21:30.939 [job5] 00:21:30.939 filename=/dev/nvme4n1 00:21:30.939 [job6] 00:21:30.939 filename=/dev/nvme5n1 00:21:30.939 [job7] 00:21:30.939 filename=/dev/nvme6n1 00:21:30.939 [job8] 00:21:30.939 filename=/dev/nvme7n1 00:21:30.939 [job9] 00:21:30.939 filename=/dev/nvme8n1 00:21:30.939 [job10] 00:21:30.939 filename=/dev/nvme9n1 00:21:30.939 Could not set queue depth (nvme0n1) 00:21:30.939 Could not set queue depth (nvme10n1) 00:21:30.939 Could not set queue 
depth (nvme1n1) 00:21:30.939 Could not set queue depth (nvme2n1) 00:21:30.939 Could not set queue depth (nvme3n1) 00:21:30.939 Could not set queue depth (nvme4n1) 00:21:30.939 Could not set queue depth (nvme5n1) 00:21:30.939 Could not set queue depth (nvme6n1) 00:21:30.939 Could not set queue depth (nvme7n1) 00:21:30.939 Could not set queue depth (nvme8n1) 00:21:30.939 Could not set queue depth (nvme9n1) 00:21:30.939 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:30.939 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:30.939 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:30.939 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:30.939 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:30.939 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:30.939 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:30.939 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:30.939 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:30.939 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:30.939 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:30.939 fio-3.35 00:21:30.939 Starting 11 threads 00:21:40.932 00:21:40.932 job0: (groupid=0, jobs=1): err= 0: pid=79695: Mon Jul 22 18:28:51 2024 00:21:40.932 write: IOPS=365, BW=91.3MiB/s (95.7MB/s)(929MiB/10177msec); 0 zone resets 00:21:40.932 slat (usec): min=21, max=28462, avg=2684.82, stdev=4685.15 00:21:40.932 clat (msec): min=22, max=371, avg=172.44, stdev=28.84 00:21:40.932 lat (msec): min=22, max=371, avg=175.13, stdev=28.86 00:21:40.932 clat percentiles (msec): 00:21:40.932 | 1.00th=[ 106], 5.00th=[ 144], 10.00th=[ 148], 20.00th=[ 153], 00:21:40.932 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 182], 00:21:40.932 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 201], 95.00th=[ 209], 00:21:40.932 | 99.00th=[ 259], 99.50th=[ 313], 99.90th=[ 359], 99.95th=[ 372], 00:21:40.932 | 99.99th=[ 372] 00:21:40.932 bw ( KiB/s): min=79553, max=108544, per=7.73%, avg=93526.45, stdev=11233.24, samples=20 00:21:40.932 iops : min= 310, max= 424, avg=365.30, stdev=43.93, samples=20 00:21:40.932 lat (msec) : 50=0.43%, 100=0.54%, 250=98.01%, 500=1.02% 00:21:40.932 cpu : usr=0.94%, sys=1.07%, ctx=5196, majf=0, minf=1 00:21:40.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:21:40.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.932 issued rwts: total=0,3717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.932 job1: (groupid=0, jobs=1): err= 0: pid=79696: Mon Jul 22 18:28:51 2024 00:21:40.932 write: IOPS=397, BW=99.4MiB/s (104MB/s)(1009MiB/10154msec); 0 zone resets 00:21:40.932 slat (usec): 
min=20, max=25022, avg=2473.51, stdev=4284.63 00:21:40.932 clat (msec): min=27, max=301, avg=158.47, stdev=20.16 00:21:40.932 lat (msec): min=27, max=301, avg=160.94, stdev=19.97 00:21:40.932 clat percentiles (msec): 00:21:40.932 | 1.00th=[ 127], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 148], 00:21:40.932 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:21:40.932 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 186], 95.00th=[ 192], 00:21:40.932 | 99.00th=[ 209], 99.50th=[ 253], 99.90th=[ 292], 99.95th=[ 292], 00:21:40.932 | 99.99th=[ 300] 00:21:40.932 bw ( KiB/s): min=83968, max=110592, per=8.40%, avg=101708.80, stdev=7923.65, samples=20 00:21:40.932 iops : min= 328, max= 432, avg=397.30, stdev=30.95, samples=20 00:21:40.932 lat (msec) : 50=0.40%, 100=0.40%, 250=98.66%, 500=0.55% 00:21:40.932 cpu : usr=1.06%, sys=1.09%, ctx=4183, majf=0, minf=1 00:21:40.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:40.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.932 issued rwts: total=0,4036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.932 job2: (groupid=0, jobs=1): err= 0: pid=79703: Mon Jul 22 18:28:51 2024 00:21:40.932 write: IOPS=366, BW=91.6MiB/s (96.1MB/s)(933MiB/10181msec); 0 zone resets 00:21:40.932 slat (usec): min=21, max=27788, avg=2675.45, stdev=4658.20 00:21:40.932 clat (msec): min=18, max=372, avg=171.82, stdev=28.24 00:21:40.932 lat (msec): min=18, max=373, avg=174.50, stdev=28.23 00:21:40.932 clat percentiles (msec): 00:21:40.932 | 1.00th=[ 84], 5.00th=[ 144], 10.00th=[ 148], 20.00th=[ 155], 00:21:40.932 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 182], 00:21:40.932 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 197], 95.00th=[ 201], 00:21:40.932 | 99.00th=[ 262], 99.50th=[ 313], 99.90th=[ 363], 99.95th=[ 372], 00:21:40.932 | 99.99th=[ 372] 00:21:40.932 bw ( KiB/s): min=79872, max=106496, per=7.76%, avg=93926.40, stdev=10247.37, samples=20 00:21:40.932 iops : min= 312, max= 416, avg=366.90, stdev=40.03, samples=20 00:21:40.932 lat (msec) : 20=0.11%, 50=0.54%, 100=0.64%, 250=97.70%, 500=1.02% 00:21:40.932 cpu : usr=0.83%, sys=1.08%, ctx=4116, majf=0, minf=1 00:21:40.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:21:40.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.932 issued rwts: total=0,3732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.932 job3: (groupid=0, jobs=1): err= 0: pid=79709: Mon Jul 22 18:28:51 2024 00:21:40.932 write: IOPS=430, BW=108MiB/s (113MB/s)(1092MiB/10134msec); 0 zone resets 00:21:40.932 slat (usec): min=21, max=13317, avg=2284.75, stdev=3943.53 00:21:40.932 clat (msec): min=7, max=286, avg=146.15, stdev=21.03 00:21:40.932 lat (msec): min=7, max=286, avg=148.43, stdev=20.97 00:21:40.932 clat percentiles (msec): 00:21:40.932 | 1.00th=[ 73], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 142], 00:21:40.932 | 30.00th=[ 146], 40.00th=[ 148], 50.00th=[ 150], 60.00th=[ 153], 00:21:40.932 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 159], 95.00th=[ 167], 00:21:40.932 | 99.00th=[ 186], 99.50th=[ 239], 99.90th=[ 275], 99.95th=[ 275], 00:21:40.932 | 99.99th=[ 288] 00:21:40.932 bw ( KiB/s): min=98304, 
max=145408, per=9.10%, avg=110156.80, stdev=12182.31, samples=20 00:21:40.932 iops : min= 384, max= 568, avg=430.30, stdev=47.59, samples=20 00:21:40.932 lat (msec) : 10=0.02%, 20=0.16%, 50=0.46%, 100=0.64%, 250=98.40% 00:21:40.932 lat (msec) : 500=0.32% 00:21:40.932 cpu : usr=0.87%, sys=1.34%, ctx=3968, majf=0, minf=1 00:21:40.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:40.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.932 issued rwts: total=0,4366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.932 job4: (groupid=0, jobs=1): err= 0: pid=79711: Mon Jul 22 18:28:51 2024 00:21:40.932 write: IOPS=398, BW=99.5MiB/s (104MB/s)(1011MiB/10156msec); 0 zone resets 00:21:40.932 slat (usec): min=21, max=23997, avg=2469.90, stdev=4271.83 00:21:40.932 clat (msec): min=27, max=300, avg=158.21, stdev=19.57 00:21:40.932 lat (msec): min=28, max=300, avg=160.68, stdev=19.36 00:21:40.932 clat percentiles (msec): 00:21:40.932 | 1.00th=[ 124], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 148], 00:21:40.932 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:21:40.932 | 70.00th=[ 159], 80.00th=[ 171], 90.00th=[ 186], 95.00th=[ 190], 00:21:40.932 | 99.00th=[ 209], 99.50th=[ 251], 99.90th=[ 292], 99.95th=[ 292], 00:21:40.932 | 99.99th=[ 300] 00:21:40.932 bw ( KiB/s): min=85504, max=109568, per=8.41%, avg=101862.40, stdev=7386.92, samples=20 00:21:40.932 iops : min= 334, max= 428, avg=397.90, stdev=28.86, samples=20 00:21:40.932 lat (msec) : 50=0.37%, 100=0.49%, 250=98.59%, 500=0.54% 00:21:40.932 cpu : usr=0.97%, sys=1.20%, ctx=3744, majf=0, minf=1 00:21:40.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:40.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.932 issued rwts: total=0,4043,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.932 job5: (groupid=0, jobs=1): err= 0: pid=79712: Mon Jul 22 18:28:51 2024 00:21:40.932 write: IOPS=408, BW=102MiB/s (107MB/s)(1036MiB/10133msec); 0 zone resets 00:21:40.932 slat (usec): min=17, max=107022, avg=2368.26, stdev=4417.92 00:21:40.932 clat (msec): min=35, max=283, avg=154.07, stdev=20.00 00:21:40.932 lat (msec): min=37, max=283, avg=156.44, stdev=19.90 00:21:40.932 clat percentiles (msec): 00:21:40.932 | 1.00th=[ 78], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 146], 00:21:40.932 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 155], 00:21:40.932 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 176], 95.00th=[ 188], 00:21:40.932 | 99.00th=[ 228], 99.50th=[ 262], 99.90th=[ 279], 99.95th=[ 284], 00:21:40.932 | 99.99th=[ 284] 00:21:40.932 bw ( KiB/s): min=67584, max=122368, per=8.63%, avg=104451.55, stdev=10514.60, samples=20 00:21:40.932 iops : min= 264, max= 478, avg=408.00, stdev=41.06, samples=20 00:21:40.932 lat (msec) : 50=0.39%, 100=1.11%, 250=97.80%, 500=0.70% 00:21:40.932 cpu : usr=0.96%, sys=1.19%, ctx=5264, majf=0, minf=1 00:21:40.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:40.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.932 issued rwts: 
total=0,4144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.932 job6: (groupid=0, jobs=1): err= 0: pid=79713: Mon Jul 22 18:28:51 2024 00:21:40.932 write: IOPS=396, BW=99.1MiB/s (104MB/s)(1007MiB/10158msec); 0 zone resets 00:21:40.932 slat (usec): min=20, max=46764, avg=2478.22, stdev=4311.95 00:21:40.932 clat (msec): min=50, max=306, avg=158.84, stdev=18.95 00:21:40.932 lat (msec): min=50, max=306, avg=161.32, stdev=18.69 00:21:40.932 clat percentiles (msec): 00:21:40.932 | 1.00th=[ 138], 5.00th=[ 142], 10.00th=[ 144], 20.00th=[ 148], 00:21:40.932 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:21:40.932 | 70.00th=[ 159], 80.00th=[ 171], 90.00th=[ 186], 95.00th=[ 190], 00:21:40.932 | 99.00th=[ 220], 99.50th=[ 257], 99.90th=[ 296], 99.95th=[ 296], 00:21:40.932 | 99.99th=[ 305] 00:21:40.932 bw ( KiB/s): min=79360, max=110592, per=8.38%, avg=101482.75, stdev=8330.80, samples=20 00:21:40.932 iops : min= 310, max= 432, avg=396.40, stdev=32.53, samples=20 00:21:40.932 lat (msec) : 100=0.47%, 250=98.98%, 500=0.55% 00:21:40.932 cpu : usr=0.90%, sys=1.19%, ctx=5427, majf=0, minf=1 00:21:40.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:21:40.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.932 issued rwts: total=0,4028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.932 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.933 job7: (groupid=0, jobs=1): err= 0: pid=79714: Mon Jul 22 18:28:51 2024 00:21:40.933 write: IOPS=744, BW=186MiB/s (195MB/s)(1874MiB/10065msec); 0 zone resets 00:21:40.933 slat (usec): min=18, max=40480, avg=1329.52, stdev=2382.53 00:21:40.933 clat (msec): min=43, max=156, avg=84.60, stdev=22.85 00:21:40.933 lat (msec): min=43, max=156, avg=85.93, stdev=23.10 00:21:40.933 clat percentiles (msec): 00:21:40.933 | 1.00th=[ 63], 5.00th=[ 64], 10.00th=[ 64], 20.00th=[ 67], 00:21:40.933 | 30.00th=[ 68], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 74], 00:21:40.933 | 70.00th=[ 109], 80.00th=[ 111], 90.00th=[ 116], 95.00th=[ 120], 00:21:40.933 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 150], 99.95th=[ 150], 00:21:40.933 | 99.99th=[ 157] 00:21:40.933 bw ( KiB/s): min=130048, max=245248, per=15.71%, avg=190233.60, stdev=50472.77, samples=20 00:21:40.933 iops : min= 508, max= 958, avg=743.10, stdev=197.16, samples=20 00:21:40.933 lat (msec) : 50=0.03%, 100=62.48%, 250=37.50% 00:21:40.933 cpu : usr=1.58%, sys=2.04%, ctx=8932, majf=0, minf=1 00:21:40.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:40.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.933 issued rwts: total=0,7494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.933 job8: (groupid=0, jobs=1): err= 0: pid=79720: Mon Jul 22 18:28:51 2024 00:21:40.933 write: IOPS=364, BW=91.0MiB/s (95.4MB/s)(927MiB/10182msec); 0 zone resets 00:21:40.933 slat (usec): min=21, max=46763, avg=2693.47, stdev=4717.94 00:21:40.933 clat (msec): min=49, max=371, avg=173.02, stdev=26.54 00:21:40.933 lat (msec): min=49, max=371, avg=175.71, stdev=26.48 00:21:40.933 clat percentiles (msec): 00:21:40.933 | 1.00th=[ 129], 5.00th=[ 146], 10.00th=[ 148], 20.00th=[ 155], 00:21:40.933 | 30.00th=[ 
155], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 182], 00:21:40.933 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 199], 95.00th=[ 207], 00:21:40.933 | 99.00th=[ 262], 99.50th=[ 313], 99.90th=[ 359], 99.95th=[ 372], 00:21:40.933 | 99.99th=[ 372] 00:21:40.933 bw ( KiB/s): min=80384, max=108544, per=7.70%, avg=93270.25, stdev=10245.92, samples=20 00:21:40.933 iops : min= 314, max= 424, avg=364.30, stdev=40.02, samples=20 00:21:40.933 lat (msec) : 50=0.03%, 100=0.59%, 250=98.35%, 500=1.03% 00:21:40.933 cpu : usr=0.89%, sys=1.14%, ctx=5074, majf=0, minf=1 00:21:40.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:21:40.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.933 issued rwts: total=0,3707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.933 job9: (groupid=0, jobs=1): err= 0: pid=79722: Mon Jul 22 18:28:51 2024 00:21:40.933 write: IOPS=430, BW=108MiB/s (113MB/s)(1092MiB/10140msec); 0 zone resets 00:21:40.933 slat (usec): min=21, max=13849, avg=2284.03, stdev=3940.30 00:21:40.933 clat (msec): min=11, max=290, avg=146.15, stdev=21.32 00:21:40.933 lat (msec): min=13, max=290, avg=148.44, stdev=21.26 00:21:40.933 clat percentiles (msec): 00:21:40.933 | 1.00th=[ 78], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 142], 00:21:40.933 | 30.00th=[ 146], 40.00th=[ 148], 50.00th=[ 150], 60.00th=[ 153], 00:21:40.933 | 70.00th=[ 155], 80.00th=[ 157], 90.00th=[ 159], 95.00th=[ 167], 00:21:40.933 | 99.00th=[ 186], 99.50th=[ 243], 99.90th=[ 284], 99.95th=[ 284], 00:21:40.933 | 99.99th=[ 292] 00:21:40.933 bw ( KiB/s): min=98304, max=145699, per=9.11%, avg=110271.15, stdev=12348.67, samples=20 00:21:40.933 iops : min= 384, max= 569, avg=430.45, stdev=48.32, samples=20 00:21:40.933 lat (msec) : 20=0.23%, 50=0.37%, 100=0.73%, 250=98.26%, 500=0.41% 00:21:40.933 cpu : usr=1.15%, sys=1.06%, ctx=5276, majf=0, minf=1 00:21:40.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:40.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.933 issued rwts: total=0,4368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.933 job10: (groupid=0, jobs=1): err= 0: pid=79723: Mon Jul 22 18:28:51 2024 00:21:40.933 write: IOPS=444, BW=111MiB/s (117MB/s)(1133MiB/10185msec); 0 zone resets 00:21:40.933 slat (usec): min=20, max=18427, avg=2180.61, stdev=3957.95 00:21:40.933 clat (msec): min=17, max=371, avg=141.63, stdev=42.66 00:21:40.933 lat (msec): min=17, max=371, avg=143.81, stdev=43.11 00:21:40.933 clat percentiles (msec): 00:21:40.933 | 1.00th=[ 73], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 110], 00:21:40.933 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 118], 60.00th=[ 129], 00:21:40.933 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 194], 95.00th=[ 197], 00:21:40.933 | 99.00th=[ 236], 99.50th=[ 296], 99.90th=[ 359], 99.95th=[ 359], 00:21:40.933 | 99.99th=[ 372] 00:21:40.933 bw ( KiB/s): min=81920, max=149504, per=9.44%, avg=114329.60, stdev=29873.46, samples=20 00:21:40.933 iops : min= 320, max= 584, avg=446.60, stdev=116.69, samples=20 00:21:40.933 lat (msec) : 20=0.09%, 50=0.62%, 100=0.99%, 250=97.46%, 500=0.84% 00:21:40.933 cpu : usr=0.94%, sys=1.30%, ctx=5298, majf=0, minf=1 00:21:40.933 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:40.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:40.933 issued rwts: total=0,4530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:40.933 00:21:40.933 Run status group 0 (all jobs): 00:21:40.933 WRITE: bw=1182MiB/s (1240MB/s), 91.0MiB/s-186MiB/s (95.4MB/s-195MB/s), io=11.8GiB (12.6GB), run=10065-10185msec 00:21:40.933 00:21:40.933 Disk stats (read/write): 00:21:40.933 nvme0n1: ios=50/7300, merge=0/0, ticks=73/1208998, in_queue=1209071, util=97.96% 00:21:40.933 nvme10n1: ios=49/7937, merge=0/0, ticks=90/1213039, in_queue=1213129, util=98.20% 00:21:40.933 nvme1n1: ios=48/7330, merge=0/0, ticks=83/1209732, in_queue=1209815, util=98.27% 00:21:40.933 nvme2n1: ios=41/8599, merge=0/0, ticks=57/1212581, in_queue=1212638, util=98.22% 00:21:40.933 nvme3n1: ios=40/7952, merge=0/0, ticks=49/1213196, in_queue=1213245, util=98.37% 00:21:40.933 nvme4n1: ios=0/8147, merge=0/0, ticks=0/1213071, in_queue=1213071, util=98.20% 00:21:40.933 nvme5n1: ios=0/7915, merge=0/0, ticks=0/1211941, in_queue=1211941, util=98.32% 00:21:40.933 nvme6n1: ios=0/14826, merge=0/0, ticks=0/1216186, in_queue=1216186, util=98.41% 00:21:40.933 nvme7n1: ios=0/7278, merge=0/0, ticks=0/1209744, in_queue=1209744, util=98.70% 00:21:40.933 nvme8n1: ios=0/8615, merge=0/0, ticks=0/1213839, in_queue=1213839, util=98.96% 00:21:40.933 nvme9n1: ios=0/8923, merge=0/0, ticks=0/1210437, in_queue=1210437, util=98.93% 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:40.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:40.933 18:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:40.933 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.933 18:28:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:40.933 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:40.933 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:40.933 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:40.933 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:40.934 18:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:40.934 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:40.934 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:40.934 18:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:40.934 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:40.934 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:40.934 18:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:40.934 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:40.934 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:40.934 18:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:40.934 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:21:40.934 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:40.935 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:40.935 
18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.935 rmmod nvme_tcp 00:21:40.935 rmmod nvme_fabrics 00:21:40.935 rmmod nvme_keyring 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 79035 ']' 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 79035 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 79035 ']' 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 79035 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79035 00:21:40.935 killing process with pid 79035 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79035' 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 79035 00:21:40.935 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 79035 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.218 18:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:44.218 00:21:44.218 real 0m52.453s 00:21:44.218 user 2m55.255s 00:21:44.218 sys 0m31.454s 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:44.218 ************************************ 00:21:44.218 END TEST nvmf_multiconnection 00:21:44.218 ************************************ 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:44.218 ************************************ 00:21:44.218 START TEST nvmf_initiator_timeout 00:21:44.218 ************************************ 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:44.218 * Looking for test storage... 
00:21:44.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.218 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.219 18:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.219 18:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:44.219 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:44.477 Cannot find device "nvmf_tgt_br" 00:21:44.477 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:21:44.477 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:44.477 Cannot find device "nvmf_tgt_br2" 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:44.478 Cannot find device "nvmf_tgt_br" 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:44.478 Cannot find device "nvmf_tgt_br2" 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # 
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:44.478 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:44.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:21:44.737 00:21:44.737 --- 10.0.0.2 ping statistics --- 00:21:44.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.737 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:44.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:44.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:21:44.737 00:21:44.737 --- 10.0.0.3 ping statistics --- 00:21:44.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.737 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:44.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:21:44.737 00:21:44.737 --- 10.0.0.1 ping statistics --- 00:21:44.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.737 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:44.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=80120 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 80120 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 80120 ']' 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
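[editor's note] The "Cannot find device" messages further up are nvmf_veth_init clearing leftovers before it rebuilds the virtual topology that the three pings just above verify: one initiator veth on the host, two target veths inside the nvmf_tgt_ns_spdk namespace, and a bridge joining the host-side ends. A condensed sketch of the commands the trace shows (interface names and addresses exactly as logged; idempotency and error handling omitted):

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: 10.0.0.1 is the initiator, 10.0.0.2 and 10.0.0.3 are the target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and bridge the host-side ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic reach port 4420 and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, as in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1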
00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.737 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:44.737 [2024-07-22 18:28:56.670786] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:21:44.737 [2024-07-22 18:28:56.670939] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.028 [2024-07-22 18:28:56.842024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.288 [2024-07-22 18:28:57.133968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.288 [2024-07-22 18:28:57.134308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.288 [2024-07-22 18:28:57.134476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.288 [2024-07-22 18:28:57.134642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.288 [2024-07-22 18:28:57.134702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.288 [2024-07-22 18:28:57.135025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.288 [2024-07-22 18:28:57.135142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.288 [2024-07-22 18:28:57.135241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.288 [2024-07-22 18:28:57.135267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.546 [2024-07-22 18:28:57.341957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:45.804 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.804 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:45.804 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.804 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:45.804 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.804 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.804 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:45.804 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:45.804 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.805 Malloc0 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.805 18:28:57 
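[editor's note] With networking up, nvmfappstart launches the target inside the namespace (shm id 0, trace mask 0xFFFF, core mask 0xF) and blocks until the RPC socket answers; that is what the DPDK EAL and reactor notices above correspond to. A rough stand-in for that step, using rpc.py's rpc_get_methods as the readiness probe (an assumption; the framework's waitforlisten helper works differently in detail):

NS=nvmf_tgt_ns_spdk
nvmf_tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt     # binary path as logged
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py           # assumed rpc_cmd backend

# Start the target with the flags from the trace.
ip netns exec "$NS" "$nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the default RPC socket until the app is ready (stand-in for waitforlisten).
for _ in $(seq 1 100); do
  "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
  sleep 0.1
done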
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.805 Delay0 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.805 [2024-07-22 18:28:57.729624] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:45.805 [2024-07-22 18:28:57.761839] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.805 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:46.062 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:21:46.062 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 
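[editor's note] The RPC sequence just traced builds the data path for the timeout test: a 64 MB malloc bdev (512-byte blocks) wrapped in a delay bdev Delay0 with 30-microsecond average/p99 latencies, a TCP transport, and subsystem cnode1 listening on 10.0.0.2:4420, after which the kernel initiator connects. The same sequence expressed directly against scripts/rpc.py (an assumption, since the trace goes through the rpc_cmd wrapper; the host NQN and ID are taken from the log):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96
HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96

"$rpc_py" bdev_malloc_create 64 512 -b Malloc0
"$rpc_py" bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # avg/p99 read and write latencies
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
"$rpc_py" nvmf_subsystem_add_ns "$NQN" Delay0
"$rpc_py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Kernel initiator side, as logged.
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420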
00:21:46.062 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:46.062 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:46.062 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:21:47.958 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:47.958 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:47.958 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:47.958 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:47.958 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:47.958 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:21:47.958 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=80179 00:21:47.958 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:21:47.958 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:21:47.958 [global] 00:21:47.958 thread=1 00:21:47.958 invalidate=1 00:21:47.958 rw=write 00:21:47.958 time_based=1 00:21:47.958 runtime=60 00:21:47.958 ioengine=libaio 00:21:47.958 direct=1 00:21:47.958 bs=4096 00:21:47.958 iodepth=1 00:21:47.958 norandommap=0 00:21:47.958 numjobs=1 00:21:47.958 00:21:47.958 verify_dump=1 00:21:47.958 verify_backlog=512 00:21:47.958 verify_state_save=0 00:21:47.958 do_verify=1 00:21:47.958 verify=crc32c-intel 00:21:47.958 [job0] 00:21:47.958 filename=/dev/nvme0n1 00:21:47.958 Could not set queue depth (nvme0n1) 00:21:48.215 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:48.215 fio-3.35 00:21:48.215 Starting 1 thread 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:51.515 true 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:51.515 true 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:51.515 true 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:51.515 true 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.515 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:21:54.042 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:21:54.042 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.042 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:54.042 true 00:21:54.042 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.042 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:21:54.042 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:54.043 true 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:54.043 true 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:54.043 true 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:21:54.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@54 -- # wait 80179 00:22:50.254 00:22:50.254 job0: (groupid=0, jobs=1): err= 0: pid=80204: Mon Jul 22 18:30:00 2024 00:22:50.254 read: IOPS=682, BW=2731KiB/s (2796kB/s)(160MiB/60000msec) 00:22:50.254 slat (usec): min=12, max=12551, avg=16.39, stdev=70.40 00:22:50.254 clat (usec): min=207, max=40376k, avg=1233.14, stdev=199500.97 00:22:50.254 lat (usec): min=220, max=40376k, avg=1249.52, stdev=199501.05 00:22:50.254 clat percentiles (usec): 00:22:50.254 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:22:50.254 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:22:50.254 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 285], 00:22:50.254 | 99.00th=[ 347], 99.50th=[ 371], 99.90th=[ 445], 99.95th=[ 494], 00:22:50.254 | 99.99th=[ 988] 00:22:50.254 write: IOPS=691, BW=2764KiB/s (2831kB/s)(162MiB/60000msec); 0 zone resets 00:22:50.254 slat (usec): min=15, max=859, avg=23.57, stdev= 8.09 00:22:50.254 clat (usec): min=146, max=1939, avg=185.21, stdev=31.59 00:22:50.254 lat (usec): min=168, max=1974, avg=208.78, stdev=34.22 00:22:50.254 clat percentiles (usec): 00:22:50.254 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:22:50.254 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:22:50.254 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 225], 00:22:50.254 | 99.00th=[ 265], 99.50th=[ 293], 99.90th=[ 392], 99.95th=[ 553], 00:22:50.254 | 99.99th=[ 1500] 00:22:50.254 bw ( KiB/s): min= 4096, max= 9320, per=100.00%, avg=8297.03, stdev=1003.29, samples=39 00:22:50.254 iops : min= 1024, max= 2330, avg=2074.26, stdev=250.82, samples=39 00:22:50.254 lat (usec) : 250=82.26%, 500=17.69%, 750=0.03%, 1000=0.01% 00:22:50.254 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:22:50.254 cpu : usr=0.59%, sys=2.06%, ctx=82435, majf=0, minf=2 00:22:50.254 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:50.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.254 issued rwts: total=40960,41467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.254 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:50.254 00:22:50.254 Run status group 0 (all jobs): 00:22:50.254 READ: bw=2731KiB/s (2796kB/s), 2731KiB/s-2731KiB/s (2796kB/s-2796kB/s), io=160MiB (168MB), run=60000-60000msec 00:22:50.254 WRITE: bw=2764KiB/s (2831kB/s), 2764KiB/s-2764KiB/s (2831kB/s-2831kB/s), io=162MiB (170MB), run=60000-60000msec 00:22:50.254 00:22:50.254 Disk stats (read/write): 00:22:50.254 nvme0n1: ios=41210/40979, merge=0/0, ticks=10428/7992, in_queue=18420, util=99.53% 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:50.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:50.254 18:30:00 
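[editor's note] The interesting part of this test sits between the fio start and the summary above: while the 60-second verify write job (fio_pid 80179) is running, the script stretches every Delay0 latency to tens of seconds, presumably past the initiator's I/O timeout, then drops them back to 30 so outstanding I/O can drain and fio can finish with the clean stats shown. A sketch of that window, assuming rpc_cmd maps to scripts/rpc.py and that fio_pid was captured when the fio wrapper was backgrounded:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Inflate the delay bdev latencies while fio runs (values as logged, in microseconds).
"$rpc_py" bdev_delay_update_latency Delay0 avg_read  31000000
"$rpc_py" bdev_delay_update_latency Delay0 avg_write 31000000
"$rpc_py" bdev_delay_update_latency Delay0 p99_read  31000000
"$rpc_py" bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3

# Restore the original 30 us latencies.
for metric in avg_read avg_write p99_read p99_write; do
  "$rpc_py" bdev_delay_update_latency Delay0 "$metric" 30
done

# The verify write job should still complete successfully.
wait "$fio_pid"
fio_status=$?
[ "$fio_status" -eq 0 ] && echo 'nvmf hotplug test: fio successful as expected'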
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:50.254 nvmf hotplug test: fio successful as expected 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:50.254 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.255 rmmod nvme_tcp 00:22:50.255 rmmod nvme_fabrics 00:22:50.255 rmmod nvme_keyring 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 80120 ']' 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 80120 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 80120 ']' 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 80120 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.255 18:30:00 
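[editor's note] Both the connect and disconnect paths in these tests gate on the same lsblk polling helpers traced above: waitforserial waits for a device with the given serial to appear (bounded retries with 2-second sleeps), and waitforserial_disconnect waits for it to vanish. A hedged sketch of those helpers, reconstructed from the traced commands rather than copied from autotest_common.sh:

waitforserial() {
  local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
  while (( i++ <= 15 )); do
    sleep 2
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices >= nvme_device_counter )) && return 0
  done
  return 1
}

waitforserial_disconnect() {
  local serial=$1
  while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
    sleep 1
  done
  return 0
}

# Usage, as in the trace: waitforserial SPDKISFASTANDAWESOME after nvme connect,
# waitforserial_disconnect SPDKISFASTANDAWESOME after nvme disconnect.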
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80120 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:50.255 killing process with pid 80120 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80120' 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 80120 00:22:50.255 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 80120 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:50.255 00:22:50.255 real 1m5.709s 00:22:50.255 user 3m56.809s 00:22:50.255 sys 0m20.257s 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:50.255 ************************************ 00:22:50.255 END TEST nvmf_initiator_timeout 00:22:50.255 ************************************ 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:22:50.255 ************************************ 00:22:50.255 END TEST nvmf_target_extra 00:22:50.255 ************************************ 00:22:50.255 00:22:50.255 real 7m9.640s 00:22:50.255 user 17m30.418s 00:22:50.255 sys 1m53.869s 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:50.255 18:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:50.255 18:30:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:50.255 18:30:01 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:50.255 18:30:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:50.255 18:30:01 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.255 18:30:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.255 ************************************ 00:22:50.255 START TEST nvmf_host 00:22:50.255 ************************************ 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:50.255 * Looking for test storage... 00:22:50.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.255 18:30:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.255 18:30:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.256 ************************************ 00:22:50.256 START TEST nvmf_identify 00:22:50.256 ************************************ 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:50.256 * Looking for test storage... 
00:22:50.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:50.256 Cannot find device "nvmf_tgt_br" 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:50.256 Cannot find device "nvmf_tgt_br2" 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:22:50.256 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:50.257 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:50.257 Cannot find device "nvmf_tgt_br" 00:22:50.257 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:22:50.257 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:50.257 Cannot find device "nvmf_tgt_br2" 00:22:50.257 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:22:50.257 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:50.257 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:50.257 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:50.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:50.257 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:22:50.257 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:50.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:50.257 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:22:50.257 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:50.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:50.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:22:50.515 00:22:50.515 --- 10.0.0.2 ping statistics --- 00:22:50.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.515 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:50.515 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:50.515 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:22:50.515 00:22:50.515 --- 10.0.0.3 ping statistics --- 00:22:50.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.515 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:50.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:22:50.515 00:22:50.515 --- 10.0.0.1 ping statistics --- 00:22:50.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.515 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:50.515 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=81055 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0xF 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 81055 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 81055 ']' 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.516 18:30:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:50.774 [2024-07-22 18:30:02.607385] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:50.774 [2024-07-22 18:30:02.607566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.774 [2024-07-22 18:30:02.780515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.032 [2024-07-22 18:30:03.030260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.032 [2024-07-22 18:30:03.030334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.032 [2024-07-22 18:30:03.030351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.032 [2024-07-22 18:30:03.030367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.032 [2024-07-22 18:30:03.030383] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
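Condensed for reference, the nvmf_veth_init sequence traced above amounts to the standalone sketch below. Interface names, addresses, firewall rules and the nvmf_tgt invocation are copied from the log; the harness variables, cleanup of leftover devices and error handling are omitted, and paths are assumed to be relative to the SPDK checkout.

# Test network: one initiator veth on the host, two target veths inside the
# nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side (10.0.0.1)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port (10.0.0.2)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port (10.0.0.3)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# With 10.0.0.1 <-> 10.0.0.2/10.0.0.3 reachable (the pings above), the target
# is launched inside the namespace, as in the trace:
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF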
00:22:51.032 [2024-07-22 18:30:03.030619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.032 [2024-07-22 18:30:03.030865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.032 [2024-07-22 18:30:03.031455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.032 [2024-07-22 18:30:03.031470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.291 [2024-07-22 18:30:03.239285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:51.549 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.549 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:51.549 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.549 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.549 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.549 [2024-07-22 18:30:03.538609] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.549 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.549 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:51.549 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.549 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.807 Malloc0 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.807 [2024-07-22 18:30:03.688054] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:51.807 [ 00:22:51.807 { 00:22:51.807 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:51.807 "subtype": "Discovery", 00:22:51.807 "listen_addresses": [ 00:22:51.807 { 00:22:51.807 "trtype": "TCP", 00:22:51.807 "adrfam": "IPv4", 00:22:51.807 "traddr": "10.0.0.2", 00:22:51.807 "trsvcid": "4420" 00:22:51.807 } 00:22:51.807 ], 00:22:51.807 "allow_any_host": true, 00:22:51.807 "hosts": [] 00:22:51.807 }, 00:22:51.807 { 00:22:51.807 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.807 "subtype": "NVMe", 00:22:51.807 "listen_addresses": [ 00:22:51.807 { 00:22:51.807 "trtype": "TCP", 00:22:51.807 "adrfam": "IPv4", 00:22:51.807 "traddr": "10.0.0.2", 00:22:51.807 "trsvcid": "4420" 00:22:51.807 } 00:22:51.807 ], 00:22:51.807 "allow_any_host": true, 00:22:51.807 "hosts": [], 00:22:51.807 "serial_number": "SPDK00000000000001", 00:22:51.807 "model_number": "SPDK bdev Controller", 00:22:51.807 "max_namespaces": 32, 00:22:51.807 "min_cntlid": 1, 00:22:51.807 "max_cntlid": 65519, 00:22:51.807 "namespaces": [ 00:22:51.807 { 00:22:51.807 "nsid": 1, 00:22:51.807 "bdev_name": "Malloc0", 00:22:51.807 "name": "Malloc0", 00:22:51.807 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:51.807 "eui64": "ABCDEF0123456789", 00:22:51.807 "uuid": "7238eee1-3a07-4939-8419-883b84039bbe" 00:22:51.807 } 00:22:51.807 ] 00:22:51.807 } 00:22:51.807 ] 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.807 18:30:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:51.807 [2024-07-22 18:30:03.761742] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
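The rpc_cmd calls traced above are a thin harness wrapper around scripts/rpc.py, so a roughly equivalent standalone provisioning of the same target, followed by the identify pass whose decoded output appears after the DEBUG trace below, would look like the sketch here. Arguments are copied from the log; it assumes the target started in the previous step is serving RPCs on the default /var/tmp/spdk.sock and that paths are relative to the SPDK checkout.

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems    # should match the JSON dumped above
# Identify against the discovery subsystem; -L all is what enables the
# nvme_tcp / nvme_ctrlr DEBUG records that make up the rest of this trace.
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all

The trace that follows then shows the standard NVMe-oF bring-up of that discovery controller: the ICReq/ICResp exchange on the TCP connection, a FABRIC CONNECT, property reads of VS and CAP, enabling CC.EN and waiting for CSTS.RDY, and finally the Identify and discovery-log reads that produce the controller report printed at the end.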
00:22:51.807 [2024-07-22 18:30:03.761867] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81090 ] 00:22:52.069 [2024-07-22 18:30:03.929624] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:52.069 [2024-07-22 18:30:03.929774] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:52.069 [2024-07-22 18:30:03.929790] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:52.069 [2024-07-22 18:30:03.929816] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:52.069 [2024-07-22 18:30:03.929833] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:52.069 [2024-07-22 18:30:03.930036] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:52.069 [2024-07-22 18:30:03.930109] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:22:52.069 [2024-07-22 18:30:03.936276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:52.069 [2024-07-22 18:30:03.936325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:52.069 [2024-07-22 18:30:03.936337] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:52.069 [2024-07-22 18:30:03.936351] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:52.069 [2024-07-22 18:30:03.936461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.069 [2024-07-22 18:30:03.936482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.069 [2024-07-22 18:30:03.936492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.069 [2024-07-22 18:30:03.936523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:52.069 [2024-07-22 18:30:03.936571] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.069 [2024-07-22 18:30:03.944236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.069 [2024-07-22 18:30:03.944270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.069 [2024-07-22 18:30:03.944279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.069 [2024-07-22 18:30:03.944289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.069 [2024-07-22 18:30:03.944323] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:52.069 [2024-07-22 18:30:03.944348] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:52.069 [2024-07-22 18:30:03.944360] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:52.069 [2024-07-22 18:30:03.944385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.069 [2024-07-22 18:30:03.944395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:22:52.069 [2024-07-22 18:30:03.944403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.069 [2024-07-22 18:30:03.944421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.069 [2024-07-22 18:30:03.944459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.069 [2024-07-22 18:30:03.944552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.069 [2024-07-22 18:30:03.944569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.069 [2024-07-22 18:30:03.944577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.069 [2024-07-22 18:30:03.944596] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.069 [2024-07-22 18:30:03.944618] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:52.070 [2024-07-22 18:30:03.944633] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:52.070 [2024-07-22 18:30:03.944648] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.944656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.944664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.070 [2024-07-22 18:30:03.944687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.070 [2024-07-22 18:30:03.944724] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.070 [2024-07-22 18:30:03.944800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.070 [2024-07-22 18:30:03.944813] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.070 [2024-07-22 18:30:03.944820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.944827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.070 [2024-07-22 18:30:03.944838] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:52.070 [2024-07-22 18:30:03.944861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:52.070 [2024-07-22 18:30:03.944876] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.944885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.944893] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.070 [2024-07-22 18:30:03.944907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.070 [2024-07-22 18:30:03.944937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.070 [2024-07-22 18:30:03.945022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:22:52.070 [2024-07-22 18:30:03.945035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.070 [2024-07-22 18:30:03.945042] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.945049] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.070 [2024-07-22 18:30:03.945061] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:52.070 [2024-07-22 18:30:03.945079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.945088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.945100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.070 [2024-07-22 18:30:03.945115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.070 [2024-07-22 18:30:03.945143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.070 [2024-07-22 18:30:03.945230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.070 [2024-07-22 18:30:03.945244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.070 [2024-07-22 18:30:03.945251] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.945258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.070 [2024-07-22 18:30:03.945276] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:52.070 [2024-07-22 18:30:03.945288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:52.070 [2024-07-22 18:30:03.945302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:52.070 [2024-07-22 18:30:03.945412] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:52.070 [2024-07-22 18:30:03.945421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:52.070 [2024-07-22 18:30:03.945445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.945454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.945466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.070 [2024-07-22 18:30:03.945480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.070 [2024-07-22 18:30:03.945511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.070 [2024-07-22 18:30:03.945590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.070 [2024-07-22 18:30:03.945603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.070 [2024-07-22 
18:30:03.945609] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.945616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.070 [2024-07-22 18:30:03.945627] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:52.070 [2024-07-22 18:30:03.945645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.945654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.945661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.070 [2024-07-22 18:30:03.945676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.070 [2024-07-22 18:30:03.945703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.070 [2024-07-22 18:30:03.945780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.070 [2024-07-22 18:30:03.945792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.070 [2024-07-22 18:30:03.945799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.945806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.070 [2024-07-22 18:30:03.945815] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:52.070 [2024-07-22 18:30:03.945828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:52.070 [2024-07-22 18:30:03.945853] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:52.070 [2024-07-22 18:30:03.945874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:52.070 [2024-07-22 18:30:03.945901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.945911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.070 [2024-07-22 18:30:03.945927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.070 [2024-07-22 18:30:03.945972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.070 [2024-07-22 18:30:03.946093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.070 [2024-07-22 18:30:03.946105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.070 [2024-07-22 18:30:03.946112] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.946119] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:22:52.070 [2024-07-22 18:30:03.946129] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): 
expected_datao=0, payload_size=4096 00:22:52.070 [2024-07-22 18:30:03.946137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.946162] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.946171] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.946189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.070 [2024-07-22 18:30:03.946200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.070 [2024-07-22 18:30:03.946220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.946229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.070 [2024-07-22 18:30:03.946248] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:52.070 [2024-07-22 18:30:03.946258] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:52.070 [2024-07-22 18:30:03.946266] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:52.070 [2024-07-22 18:30:03.946280] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:52.070 [2024-07-22 18:30:03.946289] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:52.070 [2024-07-22 18:30:03.946299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:52.070 [2024-07-22 18:30:03.946314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:52.070 [2024-07-22 18:30:03.946332] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.946341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.946349] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.070 [2024-07-22 18:30:03.946379] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.070 [2024-07-22 18:30:03.946412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.070 [2024-07-22 18:30:03.946495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.070 [2024-07-22 18:30:03.946508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.070 [2024-07-22 18:30:03.946518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.946526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.070 [2024-07-22 18:30:03.946547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.946558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.070 [2024-07-22 18:30:03.946566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.071 [2024-07-22 18:30:03.946584] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.071 [2024-07-22 18:30:03.946595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.946603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.946609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:22:52.071 [2024-07-22 18:30:03.946626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.071 [2024-07-22 18:30:03.946637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.946644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.946654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:22:52.071 [2024-07-22 18:30:03.946665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.071 [2024-07-22 18:30:03.946675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.946682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.946689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.071 [2024-07-22 18:30:03.946699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.071 [2024-07-22 18:30:03.946708] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:52.071 [2024-07-22 18:30:03.946731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:52.071 [2024-07-22 18:30:03.946743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.946752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.071 [2024-07-22 18:30:03.946765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.071 [2024-07-22 18:30:03.946796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.071 [2024-07-22 18:30:03.946808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:22:52.071 [2024-07-22 18:30:03.946821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:22:52.071 [2024-07-22 18:30:03.946829] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.071 [2024-07-22 18:30:03.946837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.071 [2024-07-22 18:30:03.946961] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.071 [2024-07-22 18:30:03.946973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.071 [2024-07-22 18:30:03.946980] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:52.071 [2024-07-22 18:30:03.946987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.071 [2024-07-22 18:30:03.946997] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:52.071 [2024-07-22 18:30:03.947008] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:52.071 [2024-07-22 18:30:03.947031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.071 [2024-07-22 18:30:03.947067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.071 [2024-07-22 18:30:03.947103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.071 [2024-07-22 18:30:03.947195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.071 [2024-07-22 18:30:03.947223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.071 [2024-07-22 18:30:03.947232] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947240] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:52.071 [2024-07-22 18:30:03.947260] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:52.071 [2024-07-22 18:30:03.947268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947286] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947295] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.071 [2024-07-22 18:30:03.947320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.071 [2024-07-22 18:30:03.947326] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.071 [2024-07-22 18:30:03.947365] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:52.071 [2024-07-22 18:30:03.947426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947439] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.071 [2024-07-22 18:30:03.947469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.071 [2024-07-22 18:30:03.947483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:52.071 [2024-07-22 
18:30:03.947514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.071 [2024-07-22 18:30:03.947549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.071 [2024-07-22 18:30:03.947567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:52.071 [2024-07-22 18:30:03.947841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.071 [2024-07-22 18:30:03.947869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.071 [2024-07-22 18:30:03.947879] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947887] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:22:52.071 [2024-07-22 18:30:03.947895] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:22:52.071 [2024-07-22 18:30:03.947907] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947920] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947927] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.071 [2024-07-22 18:30:03.947947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.071 [2024-07-22 18:30:03.947953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.947963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:52.071 [2024-07-22 18:30:03.947990] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.071 [2024-07-22 18:30:03.948002] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.071 [2024-07-22 18:30:03.948009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.948021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.071 [2024-07-22 18:30:03.948070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.948091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.071 [2024-07-22 18:30:03.948108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.071 [2024-07-22 18:30:03.948150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.071 [2024-07-22 18:30:03.952238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.071 [2024-07-22 18:30:03.952266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.071 [2024-07-22 18:30:03.952274] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.952282] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:22:52.071 [2024-07-22 18:30:03.952290] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on 
tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:22:52.071 [2024-07-22 18:30:03.952298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.952311] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.952318] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.952328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.071 [2024-07-22 18:30:03.952338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.071 [2024-07-22 18:30:03.952348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.952356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.071 [2024-07-22 18:30:03.952381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.952392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.071 [2024-07-22 18:30:03.952408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.071 [2024-07-22 18:30:03.952452] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.071 [2024-07-22 18:30:03.952577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.071 [2024-07-22 18:30:03.952593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.071 [2024-07-22 18:30:03.952602] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.952610] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:22:52.071 [2024-07-22 18:30:03.952618] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:22:52.071 [2024-07-22 18:30:03.952636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.952648] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.952655] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.071 [2024-07-22 18:30:03.952681] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.072 [2024-07-22 18:30:03.952693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.072 [2024-07-22 18:30:03.952700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.072 [2024-07-22 18:30:03.952707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.072 ===================================================== 00:22:52.072 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:52.072 ===================================================== 00:22:52.072 Controller Capabilities/Features 00:22:52.072 ================================ 00:22:52.072 Vendor ID: 0000 00:22:52.072 Subsystem Vendor ID: 0000 00:22:52.072 Serial Number: .................... 00:22:52.072 Model Number: ........................................ 
00:22:52.072 Firmware Version: 24.09 00:22:52.072 Recommended Arb Burst: 0 00:22:52.072 IEEE OUI Identifier: 00 00 00 00:22:52.072 Multi-path I/O 00:22:52.072 May have multiple subsystem ports: No 00:22:52.072 May have multiple controllers: No 00:22:52.072 Associated with SR-IOV VF: No 00:22:52.072 Max Data Transfer Size: 131072 00:22:52.072 Max Number of Namespaces: 0 00:22:52.072 Max Number of I/O Queues: 1024 00:22:52.072 NVMe Specification Version (VS): 1.3 00:22:52.072 NVMe Specification Version (Identify): 1.3 00:22:52.072 Maximum Queue Entries: 128 00:22:52.072 Contiguous Queues Required: Yes 00:22:52.072 Arbitration Mechanisms Supported 00:22:52.072 Weighted Round Robin: Not Supported 00:22:52.072 Vendor Specific: Not Supported 00:22:52.072 Reset Timeout: 15000 ms 00:22:52.072 Doorbell Stride: 4 bytes 00:22:52.072 NVM Subsystem Reset: Not Supported 00:22:52.072 Command Sets Supported 00:22:52.072 NVM Command Set: Supported 00:22:52.072 Boot Partition: Not Supported 00:22:52.072 Memory Page Size Minimum: 4096 bytes 00:22:52.072 Memory Page Size Maximum: 4096 bytes 00:22:52.072 Persistent Memory Region: Not Supported 00:22:52.072 Optional Asynchronous Events Supported 00:22:52.072 Namespace Attribute Notices: Not Supported 00:22:52.072 Firmware Activation Notices: Not Supported 00:22:52.072 ANA Change Notices: Not Supported 00:22:52.072 PLE Aggregate Log Change Notices: Not Supported 00:22:52.072 LBA Status Info Alert Notices: Not Supported 00:22:52.072 EGE Aggregate Log Change Notices: Not Supported 00:22:52.072 Normal NVM Subsystem Shutdown event: Not Supported 00:22:52.072 Zone Descriptor Change Notices: Not Supported 00:22:52.072 Discovery Log Change Notices: Supported 00:22:52.072 Controller Attributes 00:22:52.072 128-bit Host Identifier: Not Supported 00:22:52.072 Non-Operational Permissive Mode: Not Supported 00:22:52.072 NVM Sets: Not Supported 00:22:52.072 Read Recovery Levels: Not Supported 00:22:52.072 Endurance Groups: Not Supported 00:22:52.072 Predictable Latency Mode: Not Supported 00:22:52.072 Traffic Based Keep ALive: Not Supported 00:22:52.072 Namespace Granularity: Not Supported 00:22:52.072 SQ Associations: Not Supported 00:22:52.072 UUID List: Not Supported 00:22:52.072 Multi-Domain Subsystem: Not Supported 00:22:52.072 Fixed Capacity Management: Not Supported 00:22:52.072 Variable Capacity Management: Not Supported 00:22:52.072 Delete Endurance Group: Not Supported 00:22:52.072 Delete NVM Set: Not Supported 00:22:52.072 Extended LBA Formats Supported: Not Supported 00:22:52.072 Flexible Data Placement Supported: Not Supported 00:22:52.072 00:22:52.072 Controller Memory Buffer Support 00:22:52.072 ================================ 00:22:52.072 Supported: No 00:22:52.072 00:22:52.072 Persistent Memory Region Support 00:22:52.072 ================================ 00:22:52.072 Supported: No 00:22:52.072 00:22:52.072 Admin Command Set Attributes 00:22:52.072 ============================ 00:22:52.072 Security Send/Receive: Not Supported 00:22:52.072 Format NVM: Not Supported 00:22:52.072 Firmware Activate/Download: Not Supported 00:22:52.072 Namespace Management: Not Supported 00:22:52.072 Device Self-Test: Not Supported 00:22:52.072 Directives: Not Supported 00:22:52.072 NVMe-MI: Not Supported 00:22:52.072 Virtualization Management: Not Supported 00:22:52.072 Doorbell Buffer Config: Not Supported 00:22:52.072 Get LBA Status Capability: Not Supported 00:22:52.072 Command & Feature Lockdown Capability: Not Supported 00:22:52.072 Abort Command Limit: 1 00:22:52.072 Async 
Event Request Limit: 4 00:22:52.072 Number of Firmware Slots: N/A 00:22:52.072 Firmware Slot 1 Read-Only: N/A 00:22:52.072 Firmware Activation Without Reset: N/A 00:22:52.072 Multiple Update Detection Support: N/A 00:22:52.072 Firmware Update Granularity: No Information Provided 00:22:52.072 Per-Namespace SMART Log: No 00:22:52.072 Asymmetric Namespace Access Log Page: Not Supported 00:22:52.072 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:52.072 Command Effects Log Page: Not Supported 00:22:52.072 Get Log Page Extended Data: Supported 00:22:52.072 Telemetry Log Pages: Not Supported 00:22:52.072 Persistent Event Log Pages: Not Supported 00:22:52.072 Supported Log Pages Log Page: May Support 00:22:52.072 Commands Supported & Effects Log Page: Not Supported 00:22:52.072 Feature Identifiers & Effects Log Page:May Support 00:22:52.072 NVMe-MI Commands & Effects Log Page: May Support 00:22:52.072 Data Area 4 for Telemetry Log: Not Supported 00:22:52.072 Error Log Page Entries Supported: 128 00:22:52.072 Keep Alive: Not Supported 00:22:52.072 00:22:52.072 NVM Command Set Attributes 00:22:52.072 ========================== 00:22:52.072 Submission Queue Entry Size 00:22:52.072 Max: 1 00:22:52.072 Min: 1 00:22:52.072 Completion Queue Entry Size 00:22:52.072 Max: 1 00:22:52.072 Min: 1 00:22:52.072 Number of Namespaces: 0 00:22:52.072 Compare Command: Not Supported 00:22:52.072 Write Uncorrectable Command: Not Supported 00:22:52.072 Dataset Management Command: Not Supported 00:22:52.072 Write Zeroes Command: Not Supported 00:22:52.072 Set Features Save Field: Not Supported 00:22:52.072 Reservations: Not Supported 00:22:52.072 Timestamp: Not Supported 00:22:52.072 Copy: Not Supported 00:22:52.072 Volatile Write Cache: Not Present 00:22:52.072 Atomic Write Unit (Normal): 1 00:22:52.072 Atomic Write Unit (PFail): 1 00:22:52.072 Atomic Compare & Write Unit: 1 00:22:52.072 Fused Compare & Write: Supported 00:22:52.072 Scatter-Gather List 00:22:52.072 SGL Command Set: Supported 00:22:52.072 SGL Keyed: Supported 00:22:52.072 SGL Bit Bucket Descriptor: Not Supported 00:22:52.072 SGL Metadata Pointer: Not Supported 00:22:52.072 Oversized SGL: Not Supported 00:22:52.072 SGL Metadata Address: Not Supported 00:22:52.072 SGL Offset: Supported 00:22:52.072 Transport SGL Data Block: Not Supported 00:22:52.072 Replay Protected Memory Block: Not Supported 00:22:52.072 00:22:52.072 Firmware Slot Information 00:22:52.072 ========================= 00:22:52.072 Active slot: 0 00:22:52.072 00:22:52.072 00:22:52.072 Error Log 00:22:52.072 ========= 00:22:52.072 00:22:52.072 Active Namespaces 00:22:52.072 ================= 00:22:52.072 Discovery Log Page 00:22:52.072 ================== 00:22:52.072 Generation Counter: 2 00:22:52.072 Number of Records: 2 00:22:52.072 Record Format: 0 00:22:52.072 00:22:52.072 Discovery Log Entry 0 00:22:52.072 ---------------------- 00:22:52.072 Transport Type: 3 (TCP) 00:22:52.072 Address Family: 1 (IPv4) 00:22:52.072 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:52.072 Entry Flags: 00:22:52.072 Duplicate Returned Information: 1 00:22:52.072 Explicit Persistent Connection Support for Discovery: 1 00:22:52.072 Transport Requirements: 00:22:52.072 Secure Channel: Not Required 00:22:52.072 Port ID: 0 (0x0000) 00:22:52.072 Controller ID: 65535 (0xffff) 00:22:52.072 Admin Max SQ Size: 128 00:22:52.072 Transport Service Identifier: 4420 00:22:52.072 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:52.072 Transport Address: 10.0.0.2 00:22:52.072 
Discovery Log Entry 1 00:22:52.072 ---------------------- 00:22:52.072 Transport Type: 3 (TCP) 00:22:52.072 Address Family: 1 (IPv4) 00:22:52.073 Subsystem Type: 2 (NVM Subsystem) 00:22:52.073 Entry Flags: 00:22:52.073 Duplicate Returned Information: 0 00:22:52.073 Explicit Persistent Connection Support for Discovery: 0 00:22:52.073 Transport Requirements: 00:22:52.073 Secure Channel: Not Required 00:22:52.073 Port ID: 0 (0x0000) 00:22:52.073 Controller ID: 65535 (0xffff) 00:22:52.073 Admin Max SQ Size: 128 00:22:52.073 Transport Service Identifier: 4420 00:22:52.073 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:52.073 Transport Address: 10.0.0.2 [2024-07-22 18:30:03.952892] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:52.073 [2024-07-22 18:30:03.952919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.073 [2024-07-22 18:30:03.952933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.073 [2024-07-22 18:30:03.952944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:22:52.073 [2024-07-22 18:30:03.952953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.073 [2024-07-22 18:30:03.952962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:22:52.073 [2024-07-22 18:30:03.952970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.073 [2024-07-22 18:30:03.952978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.073 [2024-07-22 18:30:03.952987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.073 [2024-07-22 18:30:03.953006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953027] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.073 [2024-07-22 18:30:03.953042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.073 [2024-07-22 18:30:03.953077] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.073 [2024-07-22 18:30:03.953163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.073 [2024-07-22 18:30:03.953178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.073 [2024-07-22 18:30:03.953185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.073 [2024-07-22 18:30:03.953237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x61500000f080) 00:22:52.073 [2024-07-22 18:30:03.953275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.073 [2024-07-22 18:30:03.953319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.073 [2024-07-22 18:30:03.953419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.073 [2024-07-22 18:30:03.953432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.073 [2024-07-22 18:30:03.953438] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.073 [2024-07-22 18:30:03.953458] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:52.073 [2024-07-22 18:30:03.953468] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:52.073 [2024-07-22 18:30:03.953486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.073 [2024-07-22 18:30:03.953526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.073 [2024-07-22 18:30:03.953555] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.073 [2024-07-22 18:30:03.953628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.073 [2024-07-22 18:30:03.953640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.073 [2024-07-22 18:30:03.953646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.073 [2024-07-22 18:30:03.953672] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.073 [2024-07-22 18:30:03.953701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.073 [2024-07-22 18:30:03.953727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.073 [2024-07-22 18:30:03.953801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.073 [2024-07-22 18:30:03.953815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.073 [2024-07-22 18:30:03.953822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.073 [2024-07-22 18:30:03.953859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953869] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.953875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.073 [2024-07-22 18:30:03.953893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.073 [2024-07-22 18:30:03.953926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.073 [2024-07-22 18:30:03.954020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.073 [2024-07-22 18:30:03.954034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.073 [2024-07-22 18:30:03.954041] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.954048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.073 [2024-07-22 18:30:03.954078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.954088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.954094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.073 [2024-07-22 18:30:03.954107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.073 [2024-07-22 18:30:03.954133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.073 [2024-07-22 18:30:03.954223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.073 [2024-07-22 18:30:03.954253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.073 [2024-07-22 18:30:03.954262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.954269] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.073 [2024-07-22 18:30:03.954289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.954298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.954305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.073 [2024-07-22 18:30:03.954318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.073 [2024-07-22 18:30:03.954348] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.073 [2024-07-22 18:30:03.954420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.073 [2024-07-22 18:30:03.954440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.073 [2024-07-22 18:30:03.954448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.954455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.073 [2024-07-22 18:30:03.954472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.954482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.073 [2024-07-22 18:30:03.954489] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.073 [2024-07-22 18:30:03.954502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.074 [2024-07-22 18:30:03.954529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.074 [2024-07-22 18:30:03.954601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.074 [2024-07-22 18:30:03.954613] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.074 [2024-07-22 18:30:03.954620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.954627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.074 [2024-07-22 18:30:03.954644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.954653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.954660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.074 [2024-07-22 18:30:03.954677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.074 [2024-07-22 18:30:03.954704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.074 [2024-07-22 18:30:03.954767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.074 [2024-07-22 18:30:03.954784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.074 [2024-07-22 18:30:03.954791] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.954798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.074 [2024-07-22 18:30:03.954816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.954831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.954839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.074 [2024-07-22 18:30:03.954852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.074 [2024-07-22 18:30:03.954878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.074 [2024-07-22 18:30:03.954940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.074 [2024-07-22 18:30:03.954960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.074 [2024-07-22 18:30:03.954967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.954978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.074 [2024-07-22 18:30:03.955000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.074 [2024-07-22 18:30:03.955034] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.074 [2024-07-22 18:30:03.955062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.074 [2024-07-22 18:30:03.955136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.074 [2024-07-22 18:30:03.955151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.074 [2024-07-22 18:30:03.955158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.074 [2024-07-22 18:30:03.955183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.074 [2024-07-22 18:30:03.955223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.074 [2024-07-22 18:30:03.955252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.074 [2024-07-22 18:30:03.955344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.074 [2024-07-22 18:30:03.955360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.074 [2024-07-22 18:30:03.955367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.074 [2024-07-22 18:30:03.955392] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.074 [2024-07-22 18:30:03.955443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.074 [2024-07-22 18:30:03.955474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.074 [2024-07-22 18:30:03.955541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.074 [2024-07-22 18:30:03.955557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.074 [2024-07-22 18:30:03.955564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.074 [2024-07-22 18:30:03.955593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.074 [2024-07-22 18:30:03.955622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.074 [2024-07-22 
18:30:03.955648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.074 [2024-07-22 18:30:03.955716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.074 [2024-07-22 18:30:03.955736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.074 [2024-07-22 18:30:03.955743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.074 [2024-07-22 18:30:03.955771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.074 [2024-07-22 18:30:03.955800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.074 [2024-07-22 18:30:03.955825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.074 [2024-07-22 18:30:03.955890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.074 [2024-07-22 18:30:03.955907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.074 [2024-07-22 18:30:03.955914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.074 [2024-07-22 18:30:03.955938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.955954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.074 [2024-07-22 18:30:03.955967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.074 [2024-07-22 18:30:03.955993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.074 [2024-07-22 18:30:03.956055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.074 [2024-07-22 18:30:03.956071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.074 [2024-07-22 18:30:03.956078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.956085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.074 [2024-07-22 18:30:03.956102] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.074 [2024-07-22 18:30:03.956111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.075 [2024-07-22 18:30:03.956131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.075 [2024-07-22 18:30:03.956156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.075 [2024-07-22 
18:30:03.956242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.075 [2024-07-22 18:30:03.956260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.075 [2024-07-22 18:30:03.956267] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956274] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.075 [2024-07-22 18:30:03.956292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.075 [2024-07-22 18:30:03.956325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.075 [2024-07-22 18:30:03.956358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.075 [2024-07-22 18:30:03.956418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.075 [2024-07-22 18:30:03.956430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.075 [2024-07-22 18:30:03.956436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.075 [2024-07-22 18:30:03.956465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.075 [2024-07-22 18:30:03.956497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.075 [2024-07-22 18:30:03.956523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.075 [2024-07-22 18:30:03.956592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.075 [2024-07-22 18:30:03.956609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.075 [2024-07-22 18:30:03.956619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.075 [2024-07-22 18:30:03.956645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.075 [2024-07-22 18:30:03.956673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.075 [2024-07-22 18:30:03.956699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.075 [2024-07-22 18:30:03.956771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.075 [2024-07-22 18:30:03.956788] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.075 [2024-07-22 18:30:03.956795] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.075 [2024-07-22 18:30:03.956819] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956835] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.075 [2024-07-22 18:30:03.956848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.075 [2024-07-22 18:30:03.956873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.075 [2024-07-22 18:30:03.956935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.075 [2024-07-22 18:30:03.956947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.075 [2024-07-22 18:30:03.956954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.075 [2024-07-22 18:30:03.956978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.956993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.075 [2024-07-22 18:30:03.957006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.075 [2024-07-22 18:30:03.957036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.075 [2024-07-22 18:30:03.957095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.075 [2024-07-22 18:30:03.957107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.075 [2024-07-22 18:30:03.957113] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.075 [2024-07-22 18:30:03.957148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.075 [2024-07-22 18:30:03.957186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.075 [2024-07-22 18:30:03.957229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.075 [2024-07-22 18:30:03.957309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.075 [2024-07-22 18:30:03.957329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.075 [2024-07-22 18:30:03.957337] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.075 [2024-07-22 18:30:03.957366] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957375] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.075 [2024-07-22 18:30:03.957395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.075 [2024-07-22 18:30:03.957421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.075 [2024-07-22 18:30:03.957494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.075 [2024-07-22 18:30:03.957506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.075 [2024-07-22 18:30:03.957513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.075 [2024-07-22 18:30:03.957541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.075 [2024-07-22 18:30:03.957569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.075 [2024-07-22 18:30:03.957595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.075 [2024-07-22 18:30:03.957664] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.075 [2024-07-22 18:30:03.957681] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.075 [2024-07-22 18:30:03.957687] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.075 [2024-07-22 18:30:03.957712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.075 [2024-07-22 18:30:03.957740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.075 [2024-07-22 18:30:03.957770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.075 [2024-07-22 18:30:03.957838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.075 [2024-07-22 18:30:03.957861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.075 [2024-07-22 18:30:03.957868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957875] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.075 [2024-07-22 18:30:03.957893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.957913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.075 [2024-07-22 18:30:03.957944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.075 [2024-07-22 18:30:03.957977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.075 [2024-07-22 18:30:03.958043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.075 [2024-07-22 18:30:03.958055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.075 [2024-07-22 18:30:03.958062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.958072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.075 [2024-07-22 18:30:03.958091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.075 [2024-07-22 18:30:03.958099] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.076 [2024-07-22 18:30:03.958119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.076 [2024-07-22 18:30:03.958145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.076 [2024-07-22 18:30:03.958236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.076 [2024-07-22 18:30:03.958260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.076 [2024-07-22 18:30:03.958268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.076 [2024-07-22 18:30:03.958294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.076 [2024-07-22 18:30:03.958323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.076 [2024-07-22 18:30:03.958351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.076 [2024-07-22 18:30:03.958419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.076 [2024-07-22 18:30:03.958431] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.076 [2024-07-22 18:30:03.958437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.076 [2024-07-22 18:30:03.958462] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.076 [2024-07-22 18:30:03.958490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.076 [2024-07-22 18:30:03.958520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.076 [2024-07-22 18:30:03.958587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.076 [2024-07-22 18:30:03.958603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.076 [2024-07-22 18:30:03.958611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.076 [2024-07-22 18:30:03.958636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.076 [2024-07-22 18:30:03.958669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.076 [2024-07-22 18:30:03.958695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.076 [2024-07-22 18:30:03.958766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.076 [2024-07-22 18:30:03.958788] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.076 [2024-07-22 18:30:03.958800] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.076 [2024-07-22 18:30:03.958841] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.958864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.076 [2024-07-22 18:30:03.958883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.076 [2024-07-22 18:30:03.958918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.076 [2024-07-22 18:30:03.958986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.076 [2024-07-22 18:30:03.959008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.076 [2024-07-22 18:30:03.959022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.076 [2024-07-22 18:30:03.959057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:22:52.076 [2024-07-22 18:30:03.959079] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.076 [2024-07-22 18:30:03.959098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.076 [2024-07-22 18:30:03.959132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.076 [2024-07-22 18:30:03.959200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.076 [2024-07-22 18:30:03.959239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.076 [2024-07-22 18:30:03.959250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.076 [2024-07-22 18:30:03.959286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.076 [2024-07-22 18:30:03.959328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.076 [2024-07-22 18:30:03.959364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.076 [2024-07-22 18:30:03.959431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.076 [2024-07-22 18:30:03.959453] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.076 [2024-07-22 18:30:03.959463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.076 [2024-07-22 18:30:03.959498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.076 [2024-07-22 18:30:03.959544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.076 [2024-07-22 18:30:03.959585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.076 [2024-07-22 18:30:03.959655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.076 [2024-07-22 18:30:03.959675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.076 [2024-07-22 18:30:03.959685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.076 [2024-07-22 18:30:03.959720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 
00:22:52.076 [2024-07-22 18:30:03.959759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.076 [2024-07-22 18:30:03.959788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.076 [2024-07-22 18:30:03.959868] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.076 [2024-07-22 18:30:03.959881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.076 [2024-07-22 18:30:03.959887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.076 [2024-07-22 18:30:03.959916] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.959932] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.076 [2024-07-22 18:30:03.959949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.076 [2024-07-22 18:30:03.959976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.076 [2024-07-22 18:30:03.960042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.076 [2024-07-22 18:30:03.960059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.076 [2024-07-22 18:30:03.960066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.960073] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.076 [2024-07-22 18:30:03.960095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.960104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.960111] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.076 [2024-07-22 18:30:03.960124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.076 [2024-07-22 18:30:03.960150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.076 [2024-07-22 18:30:03.964244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.076 [2024-07-22 18:30:03.964278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.076 [2024-07-22 18:30:03.964287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.964295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.076 [2024-07-22 18:30:03.964317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.964327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.076 [2024-07-22 18:30:03.964334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.076 [2024-07-22 18:30:03.964349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:52.077 [2024-07-22 18:30:03.964382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.077 [2024-07-22 18:30:03.964461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.077 [2024-07-22 18:30:03.964473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.077 [2024-07-22 18:30:03.964480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.077 [2024-07-22 18:30:03.964487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.077 [2024-07-22 18:30:03.964501] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 11 milliseconds 00:22:52.077 00:22:52.077 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:52.338 [2024-07-22 18:30:04.082810] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:52.338 [2024-07-22 18:30:04.082970] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81099 ] 00:22:52.338 [2024-07-22 18:30:04.271755] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:52.338 [2024-07-22 18:30:04.271893] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:52.338 [2024-07-22 18:30:04.271909] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:52.338 [2024-07-22 18:30:04.271937] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:52.338 [2024-07-22 18:30:04.271954] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:52.338 [2024-07-22 18:30:04.272160] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:52.338 [2024-07-22 18:30:04.272264] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:22:52.338 [2024-07-22 18:30:04.279242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:52.338 [2024-07-22 18:30:04.279276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:52.338 [2024-07-22 18:30:04.279291] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:52.338 [2024-07-22 18:30:04.279304] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:52.338 [2024-07-22 18:30:04.279412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.279428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.279436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.338 [2024-07-22 18:30:04.279460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:52.338 [2024-07-22 18:30:04.279502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001b100, cid 0, qid 0 00:22:52.338 [2024-07-22 18:30:04.287235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.338 [2024-07-22 18:30:04.287281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.338 [2024-07-22 18:30:04.287297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.287307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.338 [2024-07-22 18:30:04.287328] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:52.338 [2024-07-22 18:30:04.287356] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:52.338 [2024-07-22 18:30:04.287372] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:52.338 [2024-07-22 18:30:04.287400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.287410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.287418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.338 [2024-07-22 18:30:04.287449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.338 [2024-07-22 18:30:04.287492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.338 [2024-07-22 18:30:04.287584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.338 [2024-07-22 18:30:04.287597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.338 [2024-07-22 18:30:04.287605] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.287617] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.338 [2024-07-22 18:30:04.287629] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:52.338 [2024-07-22 18:30:04.287643] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:52.338 [2024-07-22 18:30:04.287657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.287665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.287673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.338 [2024-07-22 18:30:04.287694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.338 [2024-07-22 18:30:04.287723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.338 [2024-07-22 18:30:04.287799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.338 [2024-07-22 18:30:04.287810] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.338 [2024-07-22 18:30:04.287817] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.287824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 
00:22:52.338 [2024-07-22 18:30:04.287835] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:52.338 [2024-07-22 18:30:04.287850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:52.338 [2024-07-22 18:30:04.287863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.287871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.287879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.338 [2024-07-22 18:30:04.287897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.338 [2024-07-22 18:30:04.287924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.338 [2024-07-22 18:30:04.287987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.338 [2024-07-22 18:30:04.287999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.338 [2024-07-22 18:30:04.288005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.288012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.338 [2024-07-22 18:30:04.288022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:52.338 [2024-07-22 18:30:04.288048] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.288057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.288065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.338 [2024-07-22 18:30:04.288078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.338 [2024-07-22 18:30:04.288105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.338 [2024-07-22 18:30:04.288173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.338 [2024-07-22 18:30:04.288188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.338 [2024-07-22 18:30:04.288195] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.338 [2024-07-22 18:30:04.288202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.339 [2024-07-22 18:30:04.288227] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:52.339 [2024-07-22 18:30:04.288238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:52.339 [2024-07-22 18:30:04.288253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:52.339 [2024-07-22 18:30:04.288363] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:52.339 [2024-07-22 18:30:04.288372] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:52.339 [2024-07-22 18:30:04.288388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.288402] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.288409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.339 [2024-07-22 18:30:04.288424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.339 [2024-07-22 18:30:04.288459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.339 [2024-07-22 18:30:04.288531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.339 [2024-07-22 18:30:04.288543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.339 [2024-07-22 18:30:04.288549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.288556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.339 [2024-07-22 18:30:04.288566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:52.339 [2024-07-22 18:30:04.288587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.288599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.288607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.339 [2024-07-22 18:30:04.288621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.339 [2024-07-22 18:30:04.288651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.339 [2024-07-22 18:30:04.288719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.339 [2024-07-22 18:30:04.288730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.339 [2024-07-22 18:30:04.288742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.288750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.339 [2024-07-22 18:30:04.288759] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:52.339 [2024-07-22 18:30:04.288769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:52.339 [2024-07-22 18:30:04.288782] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:52.339 [2024-07-22 18:30:04.288801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:52.339 [2024-07-22 18:30:04.288821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.288829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x61500000f080) 00:22:52.339 [2024-07-22 18:30:04.288844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.339 [2024-07-22 18:30:04.288887] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.339 [2024-07-22 18:30:04.289023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.339 [2024-07-22 18:30:04.289035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.339 [2024-07-22 18:30:04.289042] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289052] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:22:52.339 [2024-07-22 18:30:04.289062] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:52.339 [2024-07-22 18:30:04.289074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289092] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289101] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.339 [2024-07-22 18:30:04.289125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.339 [2024-07-22 18:30:04.289131] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.339 [2024-07-22 18:30:04.289157] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:52.339 [2024-07-22 18:30:04.289167] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:52.339 [2024-07-22 18:30:04.289176] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:52.339 [2024-07-22 18:30:04.289191] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:52.339 [2024-07-22 18:30:04.289200] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:52.339 [2024-07-22 18:30:04.289225] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:52.339 [2024-07-22 18:30:04.289242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:52.339 [2024-07-22 18:30:04.289260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.339 [2024-07-22 18:30:04.289292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.339 [2024-07-22 18:30:04.289323] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.339 [2024-07-22 18:30:04.289396] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.339 [2024-07-22 18:30:04.289420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.339 [2024-07-22 18:30:04.289428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.339 [2024-07-22 18:30:04.289449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289457] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:22:52.339 [2024-07-22 18:30:04.289489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.339 [2024-07-22 18:30:04.289503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:22:52.339 [2024-07-22 18:30:04.289536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.339 [2024-07-22 18:30:04.289546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:22:52.339 [2024-07-22 18:30:04.289570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.339 [2024-07-22 18:30:04.289580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289593] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.339 [2024-07-22 18:30:04.289603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.339 [2024-07-22 18:30:04.289612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:52.339 [2024-07-22 18:30:04.289639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:52.339 [2024-07-22 18:30:04.289652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.339 [2024-07-22 18:30:04.289673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.339 [2024-07-22 18:30:04.289704] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:22:52.339 [2024-07-22 18:30:04.289715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:22:52.339 [2024-07-22 18:30:04.289723] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:22:52.339 [2024-07-22 18:30:04.289731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.339 [2024-07-22 18:30:04.289739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.339 [2024-07-22 18:30:04.289878] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.339 [2024-07-22 18:30:04.289891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.339 [2024-07-22 18:30:04.289898] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.339 [2024-07-22 18:30:04.289916] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:52.339 [2024-07-22 18:30:04.289927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:52.339 [2024-07-22 18:30:04.289947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:52.339 [2024-07-22 18:30:04.289961] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:52.339 [2024-07-22 18:30:04.289977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.339 [2024-07-22 18:30:04.289985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.289996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.340 [2024-07-22 18:30:04.290011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:52.340 [2024-07-22 18:30:04.290039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.340 [2024-07-22 18:30:04.290119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.340 [2024-07-22 18:30:04.290130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.340 [2024-07-22 18:30:04.290136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.290146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.340 [2024-07-22 18:30:04.290262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.290295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.290314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.290323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.340 [2024-07-22 18:30:04.290338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.340 [2024-07-22 18:30:04.290372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.340 [2024-07-22 18:30:04.290470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.340 [2024-07-22 18:30:04.290482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.340 [2024-07-22 18:30:04.290488] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.290498] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:52.340 [2024-07-22 18:30:04.290507] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:52.340 [2024-07-22 18:30:04.290515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.290528] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.290535] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.290548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.340 [2024-07-22 18:30:04.290561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.340 [2024-07-22 18:30:04.290567] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.290574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.340 [2024-07-22 18:30:04.290635] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:52.340 [2024-07-22 18:30:04.290669] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.290702] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.290721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.290730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.340 [2024-07-22 18:30:04.290748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.340 [2024-07-22 18:30:04.290786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.340 [2024-07-22 18:30:04.290938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.340 [2024-07-22 18:30:04.290954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.340 [2024-07-22 18:30:04.290960] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.290967] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:52.340 [2024-07-22 18:30:04.290975] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): 
expected_datao=0, payload_size=4096 00:22:52.340 [2024-07-22 18:30:04.290983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.290998] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.291006] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.291019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.340 [2024-07-22 18:30:04.291029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.340 [2024-07-22 18:30:04.291035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.291042] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.340 [2024-07-22 18:30:04.291081] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.291106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.291124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.291133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.340 [2024-07-22 18:30:04.291151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.340 [2024-07-22 18:30:04.291184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.340 [2024-07-22 18:30:04.295239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.340 [2024-07-22 18:30:04.295265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.340 [2024-07-22 18:30:04.295274] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.295281] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:22:52.340 [2024-07-22 18:30:04.295289] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:52.340 [2024-07-22 18:30:04.295297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.295310] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.295318] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.295333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.340 [2024-07-22 18:30:04.295343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.340 [2024-07-22 18:30:04.295350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.295357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.340 [2024-07-22 18:30:04.295396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.295428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.295449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.295464] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.295474] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.295484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.295492] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:52.340 [2024-07-22 18:30:04.295501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:52.340 [2024-07-22 18:30:04.295510] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:52.340 [2024-07-22 18:30:04.295554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.295564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.340 [2024-07-22 18:30:04.295581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.340 [2024-07-22 18:30:04.295593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.295605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.295613] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:52.340 [2024-07-22 18:30:04.295624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.340 [2024-07-22 18:30:04.295661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.340 [2024-07-22 18:30:04.295674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:52.340 [2024-07-22 18:30:04.295769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.340 [2024-07-22 18:30:04.295781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.340 [2024-07-22 18:30:04.295788] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.295796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.340 [2024-07-22 18:30:04.295812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.340 [2024-07-22 18:30:04.295822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.340 [2024-07-22 18:30:04.295828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.295835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:52.340 [2024-07-22 18:30:04.295851] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.340 [2024-07-22 18:30:04.295859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:52.340 [2024-07-22 18:30:04.295872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.340 [2024-07-22 18:30:04.295899] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:52.341 [2024-07-22 18:30:04.295973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.341 [2024-07-22 18:30:04.295984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.341 [2024-07-22 18:30:04.295991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.295998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:52.341 [2024-07-22 18:30:04.296013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296021] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:52.341 [2024-07-22 18:30:04.296045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.341 [2024-07-22 18:30:04.296076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:52.341 [2024-07-22 18:30:04.296156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.341 [2024-07-22 18:30:04.296170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.341 [2024-07-22 18:30:04.296177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:52.341 [2024-07-22 18:30:04.296200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296225] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:52.341 [2024-07-22 18:30:04.296243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.341 [2024-07-22 18:30:04.296271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:52.341 [2024-07-22 18:30:04.296352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.341 [2024-07-22 18:30:04.296363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.341 [2024-07-22 18:30:04.296370] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:52.341 [2024-07-22 18:30:04.296412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:22:52.341 [2024-07-22 18:30:04.296440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:52.341 [2024-07-22 18:30:04.296457] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:22:52.341 [2024-07-22 18:30:04.296478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.341 [2024-07-22 18:30:04.296490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:22:52.341 [2024-07-22 18:30:04.296510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.341 [2024-07-22 18:30:04.296525] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:22:52.341 [2024-07-22 18:30:04.296545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.341 [2024-07-22 18:30:04.296574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:22:52.341 [2024-07-22 18:30:04.296593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:22:52.341 [2024-07-22 18:30:04.296601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:22:52.341 [2024-07-22 18:30:04.296609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:22:52.341 [2024-07-22 18:30:04.296801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.341 [2024-07-22 18:30:04.296814] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.341 [2024-07-22 18:30:04.296820] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296828] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:22:52.341 [2024-07-22 18:30:04.296837] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:22:52.341 [2024-07-22 18:30:04.296845] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296879] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296889] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.341 [2024-07-22 18:30:04.296914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.341 [2024-07-22 18:30:04.296920] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296926] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:22:52.341 [2024-07-22 18:30:04.296933] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on 
tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:22:52.341 [2024-07-22 18:30:04.296940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296951] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296957] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.341 [2024-07-22 18:30:04.296977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.341 [2024-07-22 18:30:04.296984] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.296990] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:22:52.341 [2024-07-22 18:30:04.296997] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:22:52.341 [2024-07-22 18:30:04.297004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.297017] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.297023] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.297032] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:52.341 [2024-07-22 18:30:04.297041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:52.341 [2024-07-22 18:30:04.297047] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.297053] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:22:52.341 [2024-07-22 18:30:04.297063] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:22:52.341 [2024-07-22 18:30:04.297070] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.297081] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.297087] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.297096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.341 [2024-07-22 18:30:04.297105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.341 [2024-07-22 18:30:04.297111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.297118] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:22:52.341 [2024-07-22 18:30:04.297146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.341 [2024-07-22 18:30:04.297159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.341 [2024-07-22 18:30:04.297168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.297174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:22:52.341 [2024-07-22 18:30:04.297190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.341 [2024-07-22 18:30:04.297200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.341 [2024-07-22 18:30:04.297221] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.297228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:22:52.341 [2024-07-22 18:30:04.297242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.341 [2024-07-22 18:30:04.297251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.341 [2024-07-22 18:30:04.297257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.341 [2024-07-22 18:30:04.297264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:22:52.341 ===================================================== 00:22:52.341 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:52.341 ===================================================== 00:22:52.341 Controller Capabilities/Features 00:22:52.341 ================================ 00:22:52.341 Vendor ID: 8086 00:22:52.341 Subsystem Vendor ID: 8086 00:22:52.341 Serial Number: SPDK00000000000001 00:22:52.341 Model Number: SPDK bdev Controller 00:22:52.341 Firmware Version: 24.09 00:22:52.341 Recommended Arb Burst: 6 00:22:52.341 IEEE OUI Identifier: e4 d2 5c 00:22:52.341 Multi-path I/O 00:22:52.341 May have multiple subsystem ports: Yes 00:22:52.341 May have multiple controllers: Yes 00:22:52.341 Associated with SR-IOV VF: No 00:22:52.341 Max Data Transfer Size: 131072 00:22:52.341 Max Number of Namespaces: 32 00:22:52.341 Max Number of I/O Queues: 127 00:22:52.341 NVMe Specification Version (VS): 1.3 00:22:52.341 NVMe Specification Version (Identify): 1.3 00:22:52.341 Maximum Queue Entries: 128 00:22:52.341 Contiguous Queues Required: Yes 00:22:52.341 Arbitration Mechanisms Supported 00:22:52.341 Weighted Round Robin: Not Supported 00:22:52.341 Vendor Specific: Not Supported 00:22:52.341 Reset Timeout: 15000 ms 00:22:52.341 Doorbell Stride: 4 bytes 00:22:52.341 NVM Subsystem Reset: Not Supported 00:22:52.341 Command Sets Supported 00:22:52.341 NVM Command Set: Supported 00:22:52.341 Boot Partition: Not Supported 00:22:52.341 Memory Page Size Minimum: 4096 bytes 00:22:52.342 Memory Page Size Maximum: 4096 bytes 00:22:52.342 Persistent Memory Region: Not Supported 00:22:52.342 Optional Asynchronous Events Supported 00:22:52.342 Namespace Attribute Notices: Supported 00:22:52.342 Firmware Activation Notices: Not Supported 00:22:52.342 ANA Change Notices: Not Supported 00:22:52.342 PLE Aggregate Log Change Notices: Not Supported 00:22:52.342 LBA Status Info Alert Notices: Not Supported 00:22:52.342 EGE Aggregate Log Change Notices: Not Supported 00:22:52.342 Normal NVM Subsystem Shutdown event: Not Supported 00:22:52.342 Zone Descriptor Change Notices: Not Supported 00:22:52.342 Discovery Log Change Notices: Not Supported 00:22:52.342 Controller Attributes 00:22:52.342 128-bit Host Identifier: Supported 00:22:52.342 Non-Operational Permissive Mode: Not Supported 00:22:52.342 NVM Sets: Not Supported 00:22:52.342 Read Recovery Levels: Not Supported 00:22:52.342 Endurance Groups: Not Supported 00:22:52.342 Predictable Latency Mode: Not Supported 00:22:52.342 Traffic Based Keep ALive: Not Supported 00:22:52.342 Namespace Granularity: Not Supported 00:22:52.342 SQ Associations: Not Supported 00:22:52.342 UUID List: Not Supported 00:22:52.342 Multi-Domain Subsystem: Not Supported 00:22:52.342 Fixed Capacity Management: Not Supported 00:22:52.342 Variable Capacity Management: Not Supported 
00:22:52.342 Delete Endurance Group: Not Supported 00:22:52.342 Delete NVM Set: Not Supported 00:22:52.342 Extended LBA Formats Supported: Not Supported 00:22:52.342 Flexible Data Placement Supported: Not Supported 00:22:52.342 00:22:52.342 Controller Memory Buffer Support 00:22:52.342 ================================ 00:22:52.342 Supported: No 00:22:52.342 00:22:52.342 Persistent Memory Region Support 00:22:52.342 ================================ 00:22:52.342 Supported: No 00:22:52.342 00:22:52.342 Admin Command Set Attributes 00:22:52.342 ============================ 00:22:52.342 Security Send/Receive: Not Supported 00:22:52.342 Format NVM: Not Supported 00:22:52.342 Firmware Activate/Download: Not Supported 00:22:52.342 Namespace Management: Not Supported 00:22:52.342 Device Self-Test: Not Supported 00:22:52.342 Directives: Not Supported 00:22:52.342 NVMe-MI: Not Supported 00:22:52.342 Virtualization Management: Not Supported 00:22:52.342 Doorbell Buffer Config: Not Supported 00:22:52.342 Get LBA Status Capability: Not Supported 00:22:52.342 Command & Feature Lockdown Capability: Not Supported 00:22:52.342 Abort Command Limit: 4 00:22:52.342 Async Event Request Limit: 4 00:22:52.342 Number of Firmware Slots: N/A 00:22:52.342 Firmware Slot 1 Read-Only: N/A 00:22:52.342 Firmware Activation Without Reset: N/A 00:22:52.342 Multiple Update Detection Support: N/A 00:22:52.342 Firmware Update Granularity: No Information Provided 00:22:52.342 Per-Namespace SMART Log: No 00:22:52.342 Asymmetric Namespace Access Log Page: Not Supported 00:22:52.342 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:52.342 Command Effects Log Page: Supported 00:22:52.342 Get Log Page Extended Data: Supported 00:22:52.342 Telemetry Log Pages: Not Supported 00:22:52.342 Persistent Event Log Pages: Not Supported 00:22:52.342 Supported Log Pages Log Page: May Support 00:22:52.342 Commands Supported & Effects Log Page: Not Supported 00:22:52.342 Feature Identifiers & Effects Log Page:May Support 00:22:52.342 NVMe-MI Commands & Effects Log Page: May Support 00:22:52.342 Data Area 4 for Telemetry Log: Not Supported 00:22:52.342 Error Log Page Entries Supported: 128 00:22:52.342 Keep Alive: Supported 00:22:52.342 Keep Alive Granularity: 10000 ms 00:22:52.342 00:22:52.342 NVM Command Set Attributes 00:22:52.342 ========================== 00:22:52.342 Submission Queue Entry Size 00:22:52.342 Max: 64 00:22:52.342 Min: 64 00:22:52.342 Completion Queue Entry Size 00:22:52.342 Max: 16 00:22:52.342 Min: 16 00:22:52.342 Number of Namespaces: 32 00:22:52.342 Compare Command: Supported 00:22:52.342 Write Uncorrectable Command: Not Supported 00:22:52.342 Dataset Management Command: Supported 00:22:52.342 Write Zeroes Command: Supported 00:22:52.342 Set Features Save Field: Not Supported 00:22:52.342 Reservations: Supported 00:22:52.342 Timestamp: Not Supported 00:22:52.342 Copy: Supported 00:22:52.342 Volatile Write Cache: Present 00:22:52.342 Atomic Write Unit (Normal): 1 00:22:52.342 Atomic Write Unit (PFail): 1 00:22:52.342 Atomic Compare & Write Unit: 1 00:22:52.342 Fused Compare & Write: Supported 00:22:52.342 Scatter-Gather List 00:22:52.342 SGL Command Set: Supported 00:22:52.342 SGL Keyed: Supported 00:22:52.342 SGL Bit Bucket Descriptor: Not Supported 00:22:52.342 SGL Metadata Pointer: Not Supported 00:22:52.342 Oversized SGL: Not Supported 00:22:52.342 SGL Metadata Address: Not Supported 00:22:52.342 SGL Offset: Supported 00:22:52.342 Transport SGL Data Block: Not Supported 00:22:52.342 Replay Protected Memory Block: Not 
Supported 00:22:52.342 00:22:52.342 Firmware Slot Information 00:22:52.342 ========================= 00:22:52.342 Active slot: 1 00:22:52.342 Slot 1 Firmware Revision: 24.09 00:22:52.342 00:22:52.342 00:22:52.342 Commands Supported and Effects 00:22:52.342 ============================== 00:22:52.342 Admin Commands 00:22:52.342 -------------- 00:22:52.342 Get Log Page (02h): Supported 00:22:52.342 Identify (06h): Supported 00:22:52.342 Abort (08h): Supported 00:22:52.342 Set Features (09h): Supported 00:22:52.342 Get Features (0Ah): Supported 00:22:52.342 Asynchronous Event Request (0Ch): Supported 00:22:52.342 Keep Alive (18h): Supported 00:22:52.342 I/O Commands 00:22:52.342 ------------ 00:22:52.342 Flush (00h): Supported LBA-Change 00:22:52.342 Write (01h): Supported LBA-Change 00:22:52.342 Read (02h): Supported 00:22:52.342 Compare (05h): Supported 00:22:52.342 Write Zeroes (08h): Supported LBA-Change 00:22:52.342 Dataset Management (09h): Supported LBA-Change 00:22:52.342 Copy (19h): Supported LBA-Change 00:22:52.342 00:22:52.342 Error Log 00:22:52.342 ========= 00:22:52.342 00:22:52.342 Arbitration 00:22:52.342 =========== 00:22:52.342 Arbitration Burst: 1 00:22:52.342 00:22:52.342 Power Management 00:22:52.342 ================ 00:22:52.342 Number of Power States: 1 00:22:52.342 Current Power State: Power State #0 00:22:52.342 Power State #0: 00:22:52.342 Max Power: 0.00 W 00:22:52.342 Non-Operational State: Operational 00:22:52.342 Entry Latency: Not Reported 00:22:52.342 Exit Latency: Not Reported 00:22:52.342 Relative Read Throughput: 0 00:22:52.342 Relative Read Latency: 0 00:22:52.342 Relative Write Throughput: 0 00:22:52.342 Relative Write Latency: 0 00:22:52.342 Idle Power: Not Reported 00:22:52.342 Active Power: Not Reported 00:22:52.342 Non-Operational Permissive Mode: Not Supported 00:22:52.342 00:22:52.342 Health Information 00:22:52.342 ================== 00:22:52.342 Critical Warnings: 00:22:52.342 Available Spare Space: OK 00:22:52.342 Temperature: OK 00:22:52.342 Device Reliability: OK 00:22:52.342 Read Only: No 00:22:52.342 Volatile Memory Backup: OK 00:22:52.342 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:52.342 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:52.342 Available Spare: 0% 00:22:52.342 Available Spare Threshold: 0% 00:22:52.342 Life Percentage Used:[2024-07-22 18:30:04.297468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.342 [2024-07-22 18:30:04.297484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:22:52.342 [2024-07-22 18:30:04.297503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.342 [2024-07-22 18:30:04.297538] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:22:52.342 [2024-07-22 18:30:04.297626] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.342 [2024-07-22 18:30:04.297638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.342 [2024-07-22 18:30:04.297645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.342 [2024-07-22 18:30:04.297653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:22:52.342 [2024-07-22 18:30:04.297746] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:52.343 [2024-07-22 
18:30:04.297765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:22:52.343 [2024-07-22 18:30:04.297778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.343 [2024-07-22 18:30:04.297788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:22:52.343 [2024-07-22 18:30:04.297797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.343 [2024-07-22 18:30:04.297806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:22:52.343 [2024-07-22 18:30:04.297827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.343 [2024-07-22 18:30:04.297835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.343 [2024-07-22 18:30:04.297862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.343 [2024-07-22 18:30:04.297879] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.297888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.297900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.343 [2024-07-22 18:30:04.297915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.343 [2024-07-22 18:30:04.297952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.343 [2024-07-22 18:30:04.298027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.343 [2024-07-22 18:30:04.298043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.343 [2024-07-22 18:30:04.298051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298059] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.343 [2024-07-22 18:30:04.298073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.343 [2024-07-22 18:30:04.298107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.343 [2024-07-22 18:30:04.298139] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.343 [2024-07-22 18:30:04.298266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.343 [2024-07-22 18:30:04.298281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.343 [2024-07-22 18:30:04.298287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.343 [2024-07-22 
18:30:04.298304] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:52.343 [2024-07-22 18:30:04.298313] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:52.343 [2024-07-22 18:30:04.298334] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.343 [2024-07-22 18:30:04.298365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.343 [2024-07-22 18:30:04.298399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.343 [2024-07-22 18:30:04.298466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.343 [2024-07-22 18:30:04.298478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.343 [2024-07-22 18:30:04.298487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.343 [2024-07-22 18:30:04.298513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.343 [2024-07-22 18:30:04.298541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.343 [2024-07-22 18:30:04.298566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.343 [2024-07-22 18:30:04.298640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.343 [2024-07-22 18:30:04.298651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.343 [2024-07-22 18:30:04.298658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.343 [2024-07-22 18:30:04.298682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.343 [2024-07-22 18:30:04.298709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.343 [2024-07-22 18:30:04.298733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.343 [2024-07-22 18:30:04.298803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.343 [2024-07-22 18:30:04.298816] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.343 [2024-07-22 18:30:04.298822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.343 
[2024-07-22 18:30:04.298829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.343 [2024-07-22 18:30:04.298846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298854] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298860] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.343 [2024-07-22 18:30:04.298877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.343 [2024-07-22 18:30:04.298904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.343 [2024-07-22 18:30:04.298969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.343 [2024-07-22 18:30:04.298983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.343 [2024-07-22 18:30:04.298990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.298997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.343 [2024-07-22 18:30:04.299014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.299022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.299031] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.343 [2024-07-22 18:30:04.299044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.343 [2024-07-22 18:30:04.299068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.343 [2024-07-22 18:30:04.299135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.343 [2024-07-22 18:30:04.299147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.343 [2024-07-22 18:30:04.299153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.299160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.343 [2024-07-22 18:30:04.299180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.299188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.299194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:22:52.343 [2024-07-22 18:30:04.303229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.343 [2024-07-22 18:30:04.303284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:22:52.343 [2024-07-22 18:30:04.303359] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:52.343 [2024-07-22 18:30:04.303378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:52.343 [2024-07-22 18:30:04.303385] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:52.343 [2024-07-22 18:30:04.303392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:22:52.344 
[2024-07-22 18:30:04.303408] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:22:52.344 0% 00:22:52.344 Data Units Read: 0 00:22:52.344 Data Units Written: 0 00:22:52.344 Host Read Commands: 0 00:22:52.344 Host Write Commands: 0 00:22:52.344 Controller Busy Time: 0 minutes 00:22:52.344 Power Cycles: 0 00:22:52.344 Power On Hours: 0 hours 00:22:52.344 Unsafe Shutdowns: 0 00:22:52.344 Unrecoverable Media Errors: 0 00:22:52.344 Lifetime Error Log Entries: 0 00:22:52.344 Warning Temperature Time: 0 minutes 00:22:52.344 Critical Temperature Time: 0 minutes 00:22:52.344 00:22:52.344 Number of Queues 00:22:52.344 ================ 00:22:52.344 Number of I/O Submission Queues: 127 00:22:52.344 Number of I/O Completion Queues: 127 00:22:52.344 00:22:52.344 Active Namespaces 00:22:52.344 ================= 00:22:52.344 Namespace ID:1 00:22:52.344 Error Recovery Timeout: Unlimited 00:22:52.344 Command Set Identifier: NVM (00h) 00:22:52.344 Deallocate: Supported 00:22:52.344 Deallocated/Unwritten Error: Not Supported 00:22:52.344 Deallocated Read Value: Unknown 00:22:52.344 Deallocate in Write Zeroes: Not Supported 00:22:52.344 Deallocated Guard Field: 0xFFFF 00:22:52.344 Flush: Supported 00:22:52.344 Reservation: Supported 00:22:52.344 Namespace Sharing Capabilities: Multiple Controllers 00:22:52.344 Size (in LBAs): 131072 (0GiB) 00:22:52.344 Capacity (in LBAs): 131072 (0GiB) 00:22:52.344 Utilization (in LBAs): 131072 (0GiB) 00:22:52.344 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:52.344 EUI64: ABCDEF0123456789 00:22:52.344 UUID: 7238eee1-3a07-4939-8419-883b84039bbe 00:22:52.344 Thin Provisioning: Not Supported 00:22:52.344 Per-NS Atomic Units: Yes 00:22:52.344 Atomic Boundary Size (Normal): 0 00:22:52.344 Atomic Boundary Size (PFail): 0 00:22:52.344 Atomic Boundary Offset: 0 00:22:52.344 Maximum Single Source Range Length: 65535 00:22:52.344 Maximum Copy Length: 65535 00:22:52.344 Maximum Source Range Count: 1 00:22:52.344 NGUID/EUI64 Never Reused: No 00:22:52.344 Namespace Write Protected: No 00:22:52.344 Number of LBA Formats: 1 00:22:52.344 Current LBA Format: LBA Format #00 00:22:52.344 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:52.344 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-tcp 00:22:52.602 rmmod nvme_tcp 00:22:52.602 rmmod nvme_fabrics 00:22:52.602 rmmod nvme_keyring 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 81055 ']' 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 81055 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 81055 ']' 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 81055 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81055 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:52.602 killing process with pid 81055 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81055' 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 81055 00:22:52.602 18:30:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 81055 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:53.977 ************************************ 00:22:53.977 END TEST nvmf_identify 00:22:53.977 ************************************ 00:22:53.977 00:22:53.977 real 0m3.883s 00:22:53.977 user 0m10.372s 00:22:53.977 sys 0m0.904s 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
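The identify-test teardown traced above (nvmftestfini) unloads the kernel initiator modules, stops the target process, and clears the test network state before the next test starts. A minimal hand-run equivalent, assuming root and the PID and interface names recorded in this particular run:

  # Unload the NVMe/TCP initiator stack; modprobe -r also drops
  # nvme_fabrics and nvme_keyring, as the rmmod lines above show.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the nvmf_tgt reactor process (81055 in this run) and wait for it to exit.
  kill 81055
  while kill -0 81055 2>/dev/null; do sleep 0.1; done

  # Roughly what the harness's _remove_spdk_ns amounts to here, then drop
  # any leftover IPv4 address from the initiator-side veth.
  ip netns delete nvmf_tgt_ns_spdk
  ip -4 addr flush nvmf_init_if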
00:22:53.977 18:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.977 ************************************ 00:22:53.977 START TEST nvmf_perf 00:22:53.977 ************************************ 00:22:53.977 18:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:54.236 * Looking for test storage... 00:22:54.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:54.236 Cannot find device "nvmf_tgt_br" 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 
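With NET_TYPE=virt the harness builds its own virtual topology before the target comes up: the initiator keeps 10.0.0.1 on nvmf_init_if, the target addresses 10.0.0.2 and 10.0.0.3 sit on veth peers moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are tied together by the nvmf_br bridge. The ip/iptables calls traced below (nvmf_veth_init) do exactly that; condensed into a standalone sketch using the same names and addresses (run as root):

  # Target-side network namespace.
  ip netns add nvmf_tgt_ns_spdk

  # Three veth pairs: one initiator link and two target links.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Move the target ends into the namespace and assign addresses.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up and bridge the host-side peers together.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Allow NVMe/TCP traffic in and across the bridge, then sanity-check
  # reachability in both directions, as the pings in the trace do.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1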
00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:54.236 Cannot find device "nvmf_tgt_br2" 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:54.236 Cannot find device "nvmf_tgt_br" 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:54.236 Cannot find device "nvmf_tgt_br2" 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:54.236 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:54.237 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:54.495 
18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:54.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:22:54.495 00:22:54.495 --- 10.0.0.2 ping statistics --- 00:22:54.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.495 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:54.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:54.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:22:54.495 00:22:54.495 --- 10.0.0.3 ping statistics --- 00:22:54.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.495 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:54.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:54.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:54.495 00:22:54.495 --- 10.0.0.1 ping statistics --- 00:22:54.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.495 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=81282 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 81282 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 81282 ']' 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.495 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.496 18:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:54.496 [2024-07-22 18:30:06.495905] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:22:54.496 [2024-07-22 18:30:06.496043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.754 [2024-07-22 18:30:06.666925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.012 [2024-07-22 18:30:06.945172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
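Once the namespace network answers pings, the target is launched inside it and configured over its RPC socket; the RPC calls traced further below create the TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with two namespaces (Malloc0 plus the local NVMe bdev Nvme0n1 attached via gen_nvme.sh), and listeners on 10.0.0.2:4420. A condensed sketch with the same paths and arguments; the polling loop is only a stand-in for the harness's waitforlisten helper:

  # Run the SPDK target inside the test namespace (4 cores, all trace groups).
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Wait until the RPC socket answers before configuring anything.
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  "$rpc" nvmf_create_transport -t tcp -o        # TCP transport (options as used by perf.sh)
  "$rpc" bdev_malloc_create 64 512              # 64 MiB RAM bdev, 512 B blocks -> Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420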
00:22:55.012 [2024-07-22 18:30:06.945260] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.012 [2024-07-22 18:30:06.945278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.012 [2024-07-22 18:30:06.945295] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.012 [2024-07-22 18:30:06.945310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.012 [2024-07-22 18:30:06.945497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.012 [2024-07-22 18:30:06.946055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.012 [2024-07-22 18:30:06.946149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.012 [2024-07-22 18:30:06.946154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.268 [2024-07-22 18:30:07.150240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:55.525 18:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.525 18:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:55.525 18:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:55.525 18:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.525 18:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:55.525 18:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.525 18:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:55.525 18:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:22:56.089 18:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:22:56.089 18:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:56.346 18:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:22:56.346 18:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:56.604 18:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:56.604 18:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:22:56.604 18:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:56.604 18:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:56.604 18:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:56.604 [2024-07-22 18:30:08.604823] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.863 18:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.863 18:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:56.863 18:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:57.121 18:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:57.121 18:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:57.378 18:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.646 [2024-07-22 18:30:09.527395] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.646 18:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:57.904 18:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:57.904 18:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:57.904 18:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:57.904 18:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:22:59.278 Initializing NVMe Controllers 00:22:59.278 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:59.278 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:59.278 Initialization complete. Launching workers. 00:22:59.278 ======================================================== 00:22:59.278 Latency(us) 00:22:59.278 Device Information : IOPS MiB/s Average min max 00:22:59.278 PCIE (0000:00:10.0) NSID 1 from core 0: 22677.85 88.59 1411.02 352.53 5948.35 00:22:59.278 ======================================================== 00:22:59.278 Total : 22677.85 88.59 1411.02 352.53 5948.35 00:22:59.278 00:22:59.278 18:30:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:00.651 Initializing NVMe Controllers 00:23:00.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:00.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:00.651 Initialization complete. Launching workers. 
00:23:00.651 ======================================================== 00:23:00.651 Latency(us) 00:23:00.651 Device Information : IOPS MiB/s Average min max 00:23:00.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2401.00 9.38 410.85 151.07 5063.48 00:23:00.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8116.80 7059.28 12035.77 00:23:00.651 ======================================================== 00:23:00.651 Total : 2525.00 9.86 789.28 151.07 12035.77 00:23:00.651 00:23:00.651 18:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:02.025 Initializing NVMe Controllers 00:23:02.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:02.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:02.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:02.025 Initialization complete. Launching workers. 00:23:02.025 ======================================================== 00:23:02.025 Latency(us) 00:23:02.025 Device Information : IOPS MiB/s Average min max 00:23:02.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6750.51 26.37 4740.84 828.01 12349.34 00:23:02.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3849.60 15.04 8323.43 5987.25 16392.03 00:23:02.025 ======================================================== 00:23:02.025 Total : 10600.11 41.41 6041.92 828.01 16392.03 00:23:02.025 00:23:02.025 18:30:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:02.025 18:30:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:05.306 Initializing NVMe Controllers 00:23:05.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:05.306 Controller IO queue size 128, less than required. 00:23:05.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.306 Controller IO queue size 128, less than required. 00:23:05.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:05.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:05.306 Initialization complete. Launching workers. 
00:23:05.306 ======================================================== 00:23:05.306 Latency(us) 00:23:05.306 Device Information : IOPS MiB/s Average min max 00:23:05.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1223.17 305.79 109463.54 71158.62 308131.59 00:23:05.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.84 151.71 221419.72 102175.50 458007.19 00:23:05.306 ======================================================== 00:23:05.306 Total : 1830.01 457.50 146588.75 71158.62 458007.19 00:23:05.306 00:23:05.306 18:30:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:05.306 Initializing NVMe Controllers 00:23:05.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:05.306 Controller IO queue size 128, less than required. 00:23:05.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.306 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:05.306 Controller IO queue size 128, less than required. 00:23:05.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:05.306 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:23:05.306 WARNING: Some requested NVMe devices were skipped 00:23:05.306 No valid NVMe controllers or AIO or URING devices found 00:23:05.306 18:30:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:08.590 Initializing NVMe Controllers 00:23:08.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.590 Controller IO queue size 128, less than required. 00:23:08.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:08.590 Controller IO queue size 128, less than required. 00:23:08.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:08.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:08.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:08.590 Initialization complete. Launching workers. 
00:23:08.590 00:23:08.590 ==================== 00:23:08.590 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:08.590 TCP transport: 00:23:08.590 polls: 5793 00:23:08.590 idle_polls: 2723 00:23:08.590 sock_completions: 3070 00:23:08.590 nvme_completions: 5115 00:23:08.590 submitted_requests: 7722 00:23:08.590 queued_requests: 1 00:23:08.590 00:23:08.590 ==================== 00:23:08.590 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:08.590 TCP transport: 00:23:08.590 polls: 8297 00:23:08.590 idle_polls: 5238 00:23:08.590 sock_completions: 3059 00:23:08.590 nvme_completions: 5199 00:23:08.590 submitted_requests: 7834 00:23:08.590 queued_requests: 1 00:23:08.590 ======================================================== 00:23:08.590 Latency(us) 00:23:08.590 Device Information : IOPS MiB/s Average min max 00:23:08.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1275.71 318.93 107655.53 52255.72 409145.43 00:23:08.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1296.66 324.16 98373.53 54343.91 306410.73 00:23:08.590 ======================================================== 00:23:08.590 Total : 2572.37 643.09 102976.73 52255.72 409145.43 00:23:08.590 00:23:08.590 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:08.590 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.590 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:23:08.590 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:23:08.590 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:23:08.848 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=f16c188f-3744-4f68-9357-778d6014df7e 00:23:08.848 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb f16c188f-3744-4f68-9357-778d6014df7e 00:23:08.848 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=f16c188f-3744-4f68-9357-778d6014df7e 00:23:08.848 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:23:08.848 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:23:08.848 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:23:08.848 18:30:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:09.106 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:23:09.106 { 00:23:09.106 "uuid": "f16c188f-3744-4f68-9357-778d6014df7e", 00:23:09.106 "name": "lvs_0", 00:23:09.106 "base_bdev": "Nvme0n1", 00:23:09.106 "total_data_clusters": 1278, 00:23:09.106 "free_clusters": 1278, 00:23:09.106 "block_size": 4096, 00:23:09.106 "cluster_size": 4194304 00:23:09.106 } 00:23:09.106 ]' 00:23:09.106 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f16c188f-3744-4f68-9357-778d6014df7e") .free_clusters' 00:23:09.106 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:23:09.106 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="f16c188f-3744-4f68-9357-778d6014df7e") .cluster_size' 00:23:09.364 5112 00:23:09.364 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:23:09.364 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:23:09.364 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:23:09.364 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:23:09.364 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f16c188f-3744-4f68-9357-778d6014df7e lbd_0 5112 00:23:09.622 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=9fb019ab-d776-45bf-9129-32022d97b6e3 00:23:09.622 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 9fb019ab-d776-45bf-9129-32022d97b6e3 lvs_n_0 00:23:09.880 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=a5d6591c-f563-49e5-9ac3-f89024348ed9 00:23:09.880 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb a5d6591c-f563-49e5-9ac3-f89024348ed9 00:23:09.880 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=a5d6591c-f563-49e5-9ac3-f89024348ed9 00:23:09.880 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:23:09.880 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:23:09.880 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:23:09.880 18:30:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:10.138 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:23:10.138 { 00:23:10.138 "uuid": "f16c188f-3744-4f68-9357-778d6014df7e", 00:23:10.138 "name": "lvs_0", 00:23:10.138 "base_bdev": "Nvme0n1", 00:23:10.138 "total_data_clusters": 1278, 00:23:10.138 "free_clusters": 0, 00:23:10.138 "block_size": 4096, 00:23:10.138 "cluster_size": 4194304 00:23:10.138 }, 00:23:10.138 { 00:23:10.138 "uuid": "a5d6591c-f563-49e5-9ac3-f89024348ed9", 00:23:10.138 "name": "lvs_n_0", 00:23:10.138 "base_bdev": "9fb019ab-d776-45bf-9129-32022d97b6e3", 00:23:10.138 "total_data_clusters": 1276, 00:23:10.138 "free_clusters": 1276, 00:23:10.138 "block_size": 4096, 00:23:10.138 "cluster_size": 4194304 00:23:10.138 } 00:23:10.138 ]' 00:23:10.138 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="a5d6591c-f563-49e5-9ac3-f89024348ed9") .free_clusters' 00:23:10.138 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:23:10.138 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="a5d6591c-f563-49e5-9ac3-f89024348ed9") .cluster_size' 00:23:10.396 5104 00:23:10.396 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:23:10.396 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:23:10.396 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:23:10.396 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:23:10.396 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a5d6591c-f563-49e5-9ac3-f89024348ed9 lbd_nest_0 5104 00:23:10.653 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=e028a080-611a-42ae-acc7-9a98acb02fb0 00:23:10.653 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:10.911 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:23:10.911 18:30:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e028a080-611a-42ae-acc7-9a98acb02fb0 00:23:11.169 18:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.426 18:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:23:11.426 18:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:23:11.426 18:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:11.426 18:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:11.426 18:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:11.991 Initializing NVMe Controllers 00:23:11.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.991 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:11.991 WARNING: Some requested NVMe devices were skipped 00:23:11.991 No valid NVMe controllers or AIO or URING devices found 00:23:11.991 18:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:11.991 18:30:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:24.261 Initializing NVMe Controllers 00:23:24.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:24.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:24.261 Initialization complete. Launching workers. 
00:23:24.261 ======================================================== 00:23:24.261 Latency(us) 00:23:24.261 Device Information : IOPS MiB/s Average min max 00:23:24.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 854.20 106.77 1169.99 403.51 7620.25 00:23:24.261 ======================================================== 00:23:24.261 Total : 854.20 106.77 1169.99 403.51 7620.25 00:23:24.261 00:23:24.261 18:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:24.261 18:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:24.261 18:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:24.261 Initializing NVMe Controllers 00:23:24.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:24.261 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:24.261 WARNING: Some requested NVMe devices were skipped 00:23:24.261 No valid NVMe controllers or AIO or URING devices found 00:23:24.261 18:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:24.261 18:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:34.268 Initializing NVMe Controllers 00:23:34.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:34.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:34.268 Initialization complete. Launching workers. 
00:23:34.268 ======================================================== 00:23:34.268 Latency(us) 00:23:34.268 Device Information : IOPS MiB/s Average min max 00:23:34.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1351.74 168.97 23675.08 7583.30 64278.28 00:23:34.268 ======================================================== 00:23:34.268 Total : 1351.74 168.97 23675.08 7583.30 64278.28 00:23:34.268 00:23:34.268 18:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:34.268 18:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:34.268 18:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:34.268 Initializing NVMe Controllers 00:23:34.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:34.268 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:34.268 WARNING: Some requested NVMe devices were skipped 00:23:34.268 No valid NVMe controllers or AIO or URING devices found 00:23:34.268 18:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:34.268 18:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:44.259 Initializing NVMe Controllers 00:23:44.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:44.259 Controller IO queue size 128, less than required. 00:23:44.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:44.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:44.259 Initialization complete. Launching workers. 
00:23:44.259 ======================================================== 00:23:44.259 Latency(us) 00:23:44.259 Device Information : IOPS MiB/s Average min max 00:23:44.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3275.90 409.49 39172.88 15061.29 103209.17 00:23:44.259 ======================================================== 00:23:44.259 Total : 3275.90 409.49 39172.88 15061.29 103209.17 00:23:44.259 00:23:44.259 18:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.259 18:30:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e028a080-611a-42ae-acc7-9a98acb02fb0 00:23:44.826 18:30:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:45.084 18:30:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9fb019ab-d776-45bf-9129-32022d97b6e3 00:23:45.341 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.604 rmmod nvme_tcp 00:23:45.604 rmmod nvme_fabrics 00:23:45.604 rmmod nvme_keyring 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 81282 ']' 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 81282 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 81282 ']' 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 81282 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81282 00:23:45.604 killing process with pid 81282 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81282' 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@967 -- # kill 81282 00:23:45.604 18:30:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 81282 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:48.168 00:23:48.168 real 0m53.957s 00:23:48.168 user 3m22.257s 00:23:48.168 sys 0m12.944s 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:48.168 ************************************ 00:23:48.168 END TEST nvmf_perf 00:23:48.168 ************************************ 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.168 ************************************ 00:23:48.168 START TEST nvmf_fio_host 00:23:48.168 ************************************ 00:23:48.168 18:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:48.168 * Looking for test storage... 
00:23:48.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:48.168 18:31:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.168 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:48.169 Cannot find device "nvmf_tgt_br" 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:48.169 Cannot find device "nvmf_tgt_br2" 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:48.169 
Cannot find device "nvmf_tgt_br" 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:48.169 Cannot find device "nvmf_tgt_br2" 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:23:48.169 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:48.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:48.427 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:48.427 18:31:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:48.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:23:48.427 00:23:48.427 --- 10.0.0.2 ping statistics --- 00:23:48.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.427 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:48.427 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:48.427 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:23:48.427 00:23:48.427 --- 10.0.0.3 ping statistics --- 00:23:48.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.427 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:48.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:48.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:48.427 00:23:48.427 --- 10.0.0.1 ping statistics --- 00:23:48.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.427 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:23:48.427 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:48.428 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.428 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:48.428 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:48.428 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.428 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:48.428 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=82124 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 82124 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 82124 ']' 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:48.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:48.686 18:31:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.686 [2024-07-22 18:31:00.556881] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:23:48.686 [2024-07-22 18:31:00.557050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.944 [2024-07-22 18:31:00.726572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.201 [2024-07-22 18:31:01.025958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.201 [2024-07-22 18:31:01.026024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.201 [2024-07-22 18:31:01.026043] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.201 [2024-07-22 18:31:01.026065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.201 [2024-07-22 18:31:01.026080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
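For reference, the nvmf_veth_init sequence traced above and the target launch it feeds into can be reproduced standalone with roughly the commands below. This is a minimal sketch distilled from the traced commands, assuming root privileges and the SPDK build path used in this run; the second target interface pair (nvmf_tgt_if2 / nvmf_tgt_br2 at 10.0.0.3) is created the same way and is omitted here for brevity.

    # Sketch: veth/namespace topology for the NVMe/TCP tests, distilled from the trace above.
    set -e
    ip netns add nvmf_tgt_ns_spdk

    # One veth pair for the initiator side, one for the target side; the target end
    # is moved into the namespace where nvmf_tgt will run.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, target 10.0.0.2 inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so the initiator can reach the namespaced target.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Open the default NVMe/TCP port, allow bridged traffic, and verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2

    # Load the kernel initiator and start the SPDK target inside the namespace,
    # with the same core mask and trace flags as this run.
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # The harness then waits for /var/tmp/spdk.sock to accept RPCs before issuing rpc.py calls.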
00:23:49.201 [2024-07-22 18:31:01.026852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.201 [2024-07-22 18:31:01.027001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.201 [2024-07-22 18:31:01.027079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.201 [2024-07-22 18:31:01.027096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.459 [2024-07-22 18:31:01.233746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:49.717 18:31:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.717 18:31:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:49.717 18:31:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:49.975 [2024-07-22 18:31:01.817701] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.975 18:31:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:49.975 18:31:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:49.975 18:31:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.975 18:31:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:50.234 Malloc1 00:23:50.234 18:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:50.801 18:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:51.059 18:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.059 [2024-07-22 18:31:03.064423] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.318 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:51.576 18:31:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:51.576 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:51.576 fio-3.35 00:23:51.576 Starting 1 thread 00:23:54.141 00:23:54.141 test: (groupid=0, jobs=1): err= 0: pid=82199: Mon Jul 22 18:31:05 2024 00:23:54.141 read: IOPS=6841, BW=26.7MiB/s (28.0MB/s)(53.7MiB/2008msec) 00:23:54.141 slat (usec): min=2, max=328, avg= 3.52, stdev= 3.73 00:23:54.141 clat (usec): min=2674, max=17359, avg=9703.91, stdev=836.25 00:23:54.141 lat (usec): min=2738, max=17362, avg=9707.43, stdev=836.11 00:23:54.141 clat percentiles (usec): 00:23:54.141 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:23:54.142 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:23:54.142 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:23:54.142 | 99.00th=[12256], 99.50th=[12911], 99.90th=[15664], 99.95th=[16909], 00:23:54.142 | 99.99th=[17433] 00:23:54.142 bw ( KiB/s): min=26176, max=28376, per=99.95%, avg=27350.00, stdev=941.11, samples=4 00:23:54.142 iops : min= 6544, max= 7094, avg=6838.00, stdev=235.56, samples=4 00:23:54.142 write: IOPS=6851, BW=26.8MiB/s (28.1MB/s)(53.7MiB/2008msec); 0 zone resets 00:23:54.142 slat (usec): min=2, max=235, avg= 3.62, stdev= 2.54 00:23:54.142 clat (usec): min=2511, max=17189, avg=8889.62, stdev=810.38 00:23:54.142 lat (usec): min=2526, max=17192, avg=8893.24, stdev=810.49 00:23:54.142 clat percentiles (usec): 00:23:54.142 | 1.00th=[ 6915], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 8455], 00:23:54.142 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:23:54.142 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9634], 95.00th=[10028], 00:23:54.142 | 99.00th=[11600], 99.50th=[12256], 99.90th=[15008], 99.95th=[15795], 00:23:54.142 | 99.99th=[16188] 00:23:54.142 bw ( KiB/s): min=27264, max=27592, per=99.92%, avg=27384.00, stdev=143.55, samples=4 00:23:54.142 iops : min= 6816, max= 6898, avg=6846.00, stdev=35.89, samples=4 
00:23:54.142 lat (msec) : 4=0.07%, 10=82.30%, 20=17.63% 00:23:54.142 cpu : usr=71.80%, sys=20.43%, ctx=3, majf=0, minf=1539 00:23:54.142 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:23:54.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.142 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:54.142 issued rwts: total=13737,13758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.142 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:54.142 00:23:54.142 Run status group 0 (all jobs): 00:23:54.142 READ: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=53.7MiB (56.3MB), run=2008-2008msec 00:23:54.142 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=53.7MiB (56.4MB), run=2008-2008msec 00:23:54.400 ----------------------------------------------------- 00:23:54.400 Suppressions used: 00:23:54.400 count bytes template 00:23:54.400 1 57 /usr/src/fio/parse.c 00:23:54.400 1 8 libtcmalloc_minimal.so 00:23:54.400 ----------------------------------------------------- 00:23:54.400 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:54.400 18:31:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:54.400 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:54.400 fio-3.35 00:23:54.400 Starting 1 thread 00:23:56.930 00:23:56.930 test: (groupid=0, jobs=1): err= 0: pid=82240: Mon Jul 22 18:31:08 2024 00:23:56.930 read: IOPS=6116, BW=95.6MiB/s (100MB/s)(192MiB/2010msec) 00:23:56.930 slat (usec): min=4, max=177, avg= 5.70, stdev= 3.07 00:23:56.930 clat (usec): min=2505, max=24842, avg=11576.83, stdev=3381.39 00:23:56.930 lat (usec): min=2512, max=24849, avg=11582.53, stdev=3381.77 00:23:56.930 clat percentiles (usec): 00:23:56.930 | 1.00th=[ 6390], 5.00th=[ 7111], 10.00th=[ 7767], 20.00th=[ 8717], 00:23:56.930 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[10945], 60.00th=[11731], 00:23:56.930 | 70.00th=[13042], 80.00th=[14222], 90.00th=[16450], 95.00th=[18220], 00:23:56.930 | 99.00th=[20841], 99.50th=[22676], 99.90th=[24249], 99.95th=[24511], 00:23:56.930 | 99.99th=[24773] 00:23:56.930 bw ( KiB/s): min=37280, max=56896, per=49.84%, avg=48776.00, stdev=9038.57, samples=4 00:23:56.930 iops : min= 2330, max= 3556, avg=3048.50, stdev=564.91, samples=4 00:23:56.930 write: IOPS=3601, BW=56.3MiB/s (59.0MB/s)(100MiB/1781msec); 0 zone resets 00:23:56.930 slat (usec): min=36, max=725, avg=43.21, stdev=12.03 00:23:56.930 clat (usec): min=5747, max=30051, avg=16747.28, stdev=2829.28 00:23:56.930 lat (usec): min=5788, max=30096, avg=16790.49, stdev=2830.36 00:23:56.930 clat percentiles (usec): 00:23:56.930 | 1.00th=[11076], 5.00th=[12780], 10.00th=[13566], 20.00th=[14484], 00:23:56.930 | 30.00th=[15139], 40.00th=[15795], 50.00th=[16319], 60.00th=[17171], 00:23:56.930 | 70.00th=[17957], 80.00th=[18744], 90.00th=[20579], 95.00th=[21890], 00:23:56.930 | 99.00th=[23987], 99.50th=[25822], 99.90th=[29492], 99.95th=[29754], 00:23:56.930 | 99.99th=[30016] 00:23:56.930 bw ( KiB/s): min=38944, max=59360, per=87.80%, avg=50600.00, stdev=9159.17, samples=4 00:23:56.930 iops : min= 2434, max= 3710, avg=3162.50, stdev=572.45, samples=4 00:23:56.930 lat (msec) : 4=0.08%, 10=24.84%, 20=69.46%, 50=5.62% 00:23:56.930 cpu : usr=78.56%, sys=15.42%, ctx=25, majf=0, minf=2076 00:23:56.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:23:56.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:56.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:56.931 issued rwts: total=12294,6415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:56.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:56.931 00:23:56.931 Run status group 0 (all jobs): 00:23:56.931 READ: bw=95.6MiB/s (100MB/s), 95.6MiB/s-95.6MiB/s (100MB/s-100MB/s), io=192MiB (201MB), run=2010-2010msec 00:23:56.931 WRITE: bw=56.3MiB/s (59.0MB/s), 56.3MiB/s-56.3MiB/s (59.0MB/s-59.0MB/s), io=100MiB (105MB), run=1781-1781msec 00:23:57.188 ----------------------------------------------------- 00:23:57.188 Suppressions used: 00:23:57.188 count bytes template 00:23:57.188 1 57 /usr/src/fio/parse.c 00:23:57.188 255 24480 /usr/src/fio/iolog.c 00:23:57.188 1 8 libtcmalloc_minimal.so 00:23:57.188 ----------------------------------------------------- 00:23:57.188 00:23:57.188 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.446 18:31:09 
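Stripped of the xtrace noise, the fio_host flow just completed is a short RPC sequence plus an fio invocation through the SPDK NVMe plugin. The sketch below condenses the traced commands; the $rpc shorthand is only for brevity here, and the libasan entry in LD_PRELOAD is needed only because this build is ASan-instrumented (the harness detects it via ldd, as seen above).

    # Condensed from the trace: provision a TCP subsystem backed by a malloc bdev,
    # drive it with fio through the SPDK external ioengine, then delete the subsystem.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # The filename string encodes transport, address, service id and namespace, so fio
    # talks NVMe/TCP to the target directly instead of opening a block device.
    LD_PRELOAD="/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1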
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:23:57.446 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:23:57.446 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:23:57.446 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:23:57.446 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:23:57.446 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:57.446 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:57.446 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:23:57.446 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:23:57.446 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:57.446 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:23:57.704 Nvme0n1 00:23:57.704 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:23:58.269 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=f29acfff-d6a4-4d37-878d-0ae1c4bde465 00:23:58.269 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb f29acfff-d6a4-4d37-878d-0ae1c4bde465 00:23:58.269 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=f29acfff-d6a4-4d37-878d-0ae1c4bde465 00:23:58.269 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:23:58.269 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:23:58.269 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:23:58.269 18:31:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:58.269 18:31:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:23:58.269 { 00:23:58.269 "uuid": "f29acfff-d6a4-4d37-878d-0ae1c4bde465", 00:23:58.269 "name": "lvs_0", 00:23:58.269 "base_bdev": "Nvme0n1", 00:23:58.269 "total_data_clusters": 4, 00:23:58.269 "free_clusters": 4, 00:23:58.269 "block_size": 4096, 00:23:58.269 "cluster_size": 1073741824 00:23:58.269 } 00:23:58.269 ]' 00:23:58.269 18:31:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="f29acfff-d6a4-4d37-878d-0ae1c4bde465") .free_clusters' 00:23:58.269 18:31:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:23:58.269 18:31:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="f29acfff-d6a4-4d37-878d-0ae1c4bde465") .cluster_size' 00:23:58.528 18:31:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:23:58.528 18:31:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:23:58.528 4096 00:23:58.528 18:31:10 
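The 4096 echoed at the start of the next chunk is the result of get_lvs_free_mb on the lvstore just created on Nvme0n1: the store reports 4 free clusters of 1073741824 bytes (1 GiB) each, so 4 * 1024 MiB = 4096 MiB is available for the logical volume carved out below. A minimal sketch of the same computation, reusing the RPC call and jq filters from the trace (the $rpc and $lvs_uuid variables are shorthand for this example):

    # Free-space arithmetic behind get_lvs_free_mb, using the lvstore UUID from this run.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs_uuid=f29acfff-d6a4-4d37-878d-0ae1c4bde465

    fc=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters")
    cs=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size")
    echo $(( fc * cs / 1024 / 1024 ))   # 4 clusters * 1 GiB = 4096 MiB

The same helper is applied again further down to the nested lvstore lvs_n_0, where 1022 free clusters of 4 MiB give the 4088 MiB used for lbd_nest_0.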
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:23:58.528 18:31:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:23:58.786 04ff79cc-19ee-4f86-bec1-30cbd2047a1b 00:23:58.786 18:31:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:23:59.044 18:31:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:23:59.302 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:59.561 18:31:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:59.819 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:59.819 fio-3.35 
00:23:59.819 Starting 1 thread 00:24:02.349 00:24:02.349 test: (groupid=0, jobs=1): err= 0: pid=82342: Mon Jul 22 18:31:13 2024 00:24:02.349 read: IOPS=4960, BW=19.4MiB/s (20.3MB/s)(38.9MiB/2010msec) 00:24:02.349 slat (usec): min=2, max=184, avg= 3.73, stdev= 2.73 00:24:02.349 clat (usec): min=3338, max=23983, avg=13421.46, stdev=1330.18 00:24:02.349 lat (usec): min=3343, max=23987, avg=13425.20, stdev=1330.14 00:24:02.349 clat percentiles (usec): 00:24:02.349 | 1.00th=[10945], 5.00th=[11731], 10.00th=[11994], 20.00th=[12387], 00:24:02.349 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:24:02.349 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15533], 00:24:02.349 | 99.00th=[17433], 99.50th=[18744], 99.90th=[20579], 99.95th=[22152], 00:24:02.349 | 99.99th=[23987] 00:24:02.349 bw ( KiB/s): min=19216, max=20224, per=99.82%, avg=19808.00, stdev=456.63, samples=4 00:24:02.349 iops : min= 4804, max= 5056, avg=4952.00, stdev=114.16, samples=4 00:24:02.349 write: IOPS=4956, BW=19.4MiB/s (20.3MB/s)(38.9MiB/2010msec); 0 zone resets 00:24:02.349 slat (usec): min=2, max=138, avg= 4.01, stdev= 2.11 00:24:02.349 clat (usec): min=2039, max=22336, avg=12219.01, stdev=1264.45 00:24:02.349 lat (usec): min=2049, max=22340, avg=12223.02, stdev=1264.54 00:24:02.349 clat percentiles (usec): 00:24:02.349 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:24:02.349 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:24:02.349 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13566], 95.00th=[14091], 00:24:02.349 | 99.00th=[16319], 99.50th=[17433], 99.90th=[20579], 99.95th=[21890], 00:24:02.349 | 99.99th=[22414] 00:24:02.349 bw ( KiB/s): min=19225, max=20104, per=99.79%, avg=19784.25, stdev=398.52, samples=4 00:24:02.349 iops : min= 4806, max= 5026, avg=4946.00, stdev=99.75, samples=4 00:24:02.350 lat (msec) : 4=0.07%, 10=0.72%, 20=99.04%, 50=0.17% 00:24:02.350 cpu : usr=73.62%, sys=19.96%, ctx=15, majf=0, minf=1538 00:24:02.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:24:02.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:02.350 issued rwts: total=9971,9962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:02.350 00:24:02.350 Run status group 0 (all jobs): 00:24:02.350 READ: bw=19.4MiB/s (20.3MB/s), 19.4MiB/s-19.4MiB/s (20.3MB/s-20.3MB/s), io=38.9MiB (40.8MB), run=2010-2010msec 00:24:02.350 WRITE: bw=19.4MiB/s (20.3MB/s), 19.4MiB/s-19.4MiB/s (20.3MB/s-20.3MB/s), io=38.9MiB (40.8MB), run=2010-2010msec 00:24:02.350 ----------------------------------------------------- 00:24:02.350 Suppressions used: 00:24:02.350 count bytes template 00:24:02.350 1 58 /usr/src/fio/parse.c 00:24:02.350 1 8 libtcmalloc_minimal.so 00:24:02.350 ----------------------------------------------------- 00:24:02.350 00:24:02.350 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:02.609 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:02.867 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=4e02216b-28a2-44e2-ac50-bbf2203007a2 00:24:02.867 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@65 -- # get_lvs_free_mb 4e02216b-28a2-44e2-ac50-bbf2203007a2 00:24:02.867 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=4e02216b-28a2-44e2-ac50-bbf2203007a2 00:24:02.867 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:24:02.867 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:24:02.867 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:24:02.867 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:03.126 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:24:03.126 { 00:24:03.126 "uuid": "f29acfff-d6a4-4d37-878d-0ae1c4bde465", 00:24:03.126 "name": "lvs_0", 00:24:03.126 "base_bdev": "Nvme0n1", 00:24:03.126 "total_data_clusters": 4, 00:24:03.126 "free_clusters": 0, 00:24:03.126 "block_size": 4096, 00:24:03.126 "cluster_size": 1073741824 00:24:03.126 }, 00:24:03.126 { 00:24:03.126 "uuid": "4e02216b-28a2-44e2-ac50-bbf2203007a2", 00:24:03.126 "name": "lvs_n_0", 00:24:03.126 "base_bdev": "04ff79cc-19ee-4f86-bec1-30cbd2047a1b", 00:24:03.126 "total_data_clusters": 1022, 00:24:03.126 "free_clusters": 1022, 00:24:03.126 "block_size": 4096, 00:24:03.126 "cluster_size": 4194304 00:24:03.126 } 00:24:03.126 ]' 00:24:03.126 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="4e02216b-28a2-44e2-ac50-bbf2203007a2") .free_clusters' 00:24:03.126 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:24:03.126 18:31:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="4e02216b-28a2-44e2-ac50-bbf2203007a2") .cluster_size' 00:24:03.126 4088 00:24:03.126 18:31:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:24:03.126 18:31:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:24:03.126 18:31:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:24:03.126 18:31:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:24:03.384 4165fdd5-c4fc-4214-b6ce-3f29072e1d15 00:24:03.384 18:31:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:03.642 18:31:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:03.900 18:31:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:04.158 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:04.158 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 
ns=1' --bs=4096 00:24:04.158 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:04.158 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:04.158 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:04.158 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:04.158 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:04.159 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:04.159 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:04.159 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:04.159 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:04.159 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:04.159 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:04.159 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:04.159 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:24:04.159 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:04.159 18:31:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:04.417 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:04.417 fio-3.35 00:24:04.417 Starting 1 thread 00:24:06.945 00:24:06.945 test: (groupid=0, jobs=1): err= 0: pid=82415: Mon Jul 22 18:31:18 2024 00:24:06.945 read: IOPS=4393, BW=17.2MiB/s (18.0MB/s)(34.6MiB/2013msec) 00:24:06.945 slat (usec): min=3, max=233, avg= 3.77, stdev= 3.44 00:24:06.945 clat (usec): min=4305, max=26119, avg=15175.30, stdev=1506.57 00:24:06.945 lat (usec): min=4318, max=26123, avg=15179.08, stdev=1506.44 00:24:06.945 clat percentiles (usec): 00:24:06.945 | 1.00th=[12256], 5.00th=[13042], 10.00th=[13435], 20.00th=[13960], 00:24:06.945 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:24:06.945 | 70.00th=[15795], 80.00th=[16319], 90.00th=[16909], 95.00th=[17695], 00:24:06.945 | 99.00th=[19006], 99.50th=[19792], 99.90th=[22938], 99.95th=[24773], 00:24:06.945 | 99.99th=[26084] 00:24:06.945 bw ( KiB/s): min=15904, max=18624, per=99.97%, avg=17570.00, stdev=1174.53, samples=4 00:24:06.945 iops : min= 3976, max= 4656, avg=4392.50, stdev=293.63, samples=4 00:24:06.945 write: IOPS=4397, BW=17.2MiB/s (18.0MB/s)(34.6MiB/2013msec); 0 zone resets 00:24:06.945 slat (usec): min=3, max=161, avg= 3.95, stdev= 2.11 00:24:06.945 clat (usec): min=2659, max=24534, avg=13751.52, stdev=1428.15 00:24:06.946 lat (usec): min=2671, max=24538, avg=13755.47, stdev=1428.21 00:24:06.946 clat percentiles (usec): 00:24:06.946 | 1.00th=[10945], 5.00th=[11731], 
10.00th=[12125], 20.00th=[12649], 00:24:06.946 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[14091], 00:24:06.946 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15533], 95.00th=[16057], 00:24:06.946 | 99.00th=[17171], 99.50th=[18482], 99.90th=[22676], 99.95th=[22938], 00:24:06.946 | 99.99th=[24511] 00:24:06.946 bw ( KiB/s): min=16840, max=18008, per=99.89%, avg=17570.00, stdev=551.07, samples=4 00:24:06.946 iops : min= 4210, max= 4502, avg=4392.50, stdev=137.77, samples=4 00:24:06.946 lat (msec) : 4=0.01%, 10=0.32%, 20=99.30%, 50=0.37% 00:24:06.946 cpu : usr=73.56%, sys=21.22%, ctx=4, majf=0, minf=1539 00:24:06.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:06.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:06.946 issued rwts: total=8845,8852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:06.946 00:24:06.946 Run status group 0 (all jobs): 00:24:06.946 READ: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=34.6MiB (36.2MB), run=2013-2013msec 00:24:06.946 WRITE: bw=17.2MiB/s (18.0MB/s), 17.2MiB/s-17.2MiB/s (18.0MB/s-18.0MB/s), io=34.6MiB (36.3MB), run=2013-2013msec 00:24:06.946 ----------------------------------------------------- 00:24:06.946 Suppressions used: 00:24:06.946 count bytes template 00:24:06.946 1 58 /usr/src/fio/parse.c 00:24:06.946 1 8 libtcmalloc_minimal.so 00:24:06.946 ----------------------------------------------------- 00:24:06.946 00:24:06.946 18:31:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:07.204 18:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:24:07.462 18:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:07.720 18:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:07.978 18:31:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:24:08.290 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:08.290 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:24:08.858 rmmod nvme_tcp 00:24:08.858 rmmod nvme_fabrics 00:24:08.858 rmmod nvme_keyring 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 82124 ']' 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 82124 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 82124 ']' 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 82124 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82124 00:24:08.858 killing process with pid 82124 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82124' 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 82124 00:24:08.858 18:31:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 82124 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:10.759 00:24:10.759 real 0m22.392s 00:24:10.759 user 1m36.258s 00:24:10.759 sys 0m4.891s 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.759 ************************************ 00:24:10.759 END TEST nvmf_fio_host 00:24:10.759 ************************************ 00:24:10.759 18:31:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:10.760 
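Before nvmf_failover gets underway, it is worth noting the shape of the cleanup that just ran: subsystems are deleted first, then the logical volumes and their stores (nested store before the outer one), then the NVMe controller is detached, and finally the kernel initiator modules are unloaded and the target process is stopped. Roughly, as a standalone sequence using the same RPC calls as the trace (the pid and interface name are the ones from this particular run):

    # Teardown order mirrors the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
    $rpc bdev_lvol_delete lvs_n_0/lbd_nest_0
    $rpc bdev_lvol_delete_lvstore -l lvs_n_0
    $rpc bdev_lvol_delete lvs_0/lbd_0
    $rpc bdev_lvol_delete_lvstore -l lvs_0
    $rpc bdev_nvme_detach_controller Nvme0

    # Unload the kernel NVMe/TCP initiator modules.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 82124                        # nvmf_tgt pid from this run; the harness also waits for it
    ip -4 addr flush nvmf_init_if     # drop the initiator-side test address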
18:31:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.760 ************************************ 00:24:10.760 START TEST nvmf_failover 00:24:10.760 ************************************ 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:10.760 * Looking for test storage... 00:24:10.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.760 18:31:22 
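The repeated paths/export.sh entries above are why PATH keeps growing: each time the script is sourced it prepends the Go, protoc and golangci toolchain directories again before exporting. Reduced to its effect, using the directories shown in the trace (a sketch, not the actual script):

# Net effect of sourcing /etc/opt/spdk-pkgdep/paths/export.sh, per the trace above.
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH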
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:10.760 18:31:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:10.760 Cannot find device "nvmf_tgt_br" 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:10.760 Cannot find device "nvmf_tgt_br2" 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:10.760 Cannot find device "nvmf_tgt_br" 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:10.760 Cannot find device "nvmf_tgt_br2" 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:10.760 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:24:10.760 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:10.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:10.761 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:11.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:24:11.020 00:24:11.020 --- 10.0.0.2 ping statistics --- 00:24:11.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.020 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:11.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:11.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:24:11.020 00:24:11.020 --- 10.0.0.3 ping statistics --- 00:24:11.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.020 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:11.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:11.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:24:11.020 00:24:11.020 --- 10.0.0.1 ping statistics --- 00:24:11.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.020 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:11.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=82666 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 82666 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 82666 ']' 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.020 18:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:11.278 [2024-07-22 18:31:23.046044] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:11.278 [2024-07-22 18:31:23.046251] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.278 [2024-07-22 18:31:23.229012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:11.536 [2024-07-22 18:31:23.478854] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
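The nvmf_veth_init block above builds the virtual network the failover run depends on: three veth pairs, with the target ends of two of them moved into the nvmf_tgt_ns_spdk namespace, all host-side ends tied to one bridge, and TCP/4420 opened from the initiator interface. Condensed from the ip/iptables calls in the trace (a sketch with the same interface names and addresses; cleanup and error handling omitted):

# Condensed from the nvmf_veth_init trace above.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target path 2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk             # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address (host side)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                             # tie the host-side ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                    # host -> both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # namespace -> host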
00:24:11.536 [2024-07-22 18:31:23.478932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.536 [2024-07-22 18:31:23.478950] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.536 [2024-07-22 18:31:23.478965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.536 [2024-07-22 18:31:23.478977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.536 [2024-07-22 18:31:23.479744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.536 [2024-07-22 18:31:23.479884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.536 [2024-07-22 18:31:23.479896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.794 [2024-07-22 18:31:23.693607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:12.052 18:31:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.052 18:31:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:12.052 18:31:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.052 18:31:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.052 18:31:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:12.052 18:31:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.052 18:31:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:12.311 [2024-07-22 18:31:24.228550] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.311 18:31:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:12.569 Malloc0 00:24:12.569 18:31:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:12.827 18:31:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.085 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.343 [2024-07-22 18:31:25.285034] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.343 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:13.602 [2024-07-22 18:31:25.517260] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:13.602 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:13.860 [2024-07-22 18:31:25.789586] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4422 *** 00:24:13.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:13.860 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=82724 00:24:13.860 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:13.861 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:13.861 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 82724 /var/tmp/bdevperf.sock 00:24:13.861 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 82724 ']' 00:24:13.861 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.861 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.861 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.861 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.861 18:31:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:15.239 18:31:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:15.239 18:31:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:15.239 18:31:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:15.497 NVMe0n1 00:24:15.497 18:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:15.755 00:24:15.755 18:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=82753 00:24:15.755 18:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.755 18:31:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:16.689 18:31:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.947 [2024-07-22 18:31:28.829344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:24:16.948 [2024-07-22 18:31:28.830607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:24:16.948 [2024-07-22 18:31:28.830809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:24:16.948 18:31:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:20.228 18:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:20.228 00:24:20.228 18:31:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:20.537 18:31:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:23.829 18:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.829 [2024-07-22 18:31:35.775356] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.829 18:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:25.207 18:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:25.207 18:31:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 82753 00:24:31.804 0 00:24:31.804 18:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 82724 00:24:31.804 18:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 82724 ']' 00:24:31.804 18:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 82724 00:24:31.804 18:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:31.804 18:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.804 18:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82724 00:24:31.804 killing process with pid 82724 00:24:31.804 18:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:31.804 18:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:31.804 18:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82724' 00:24:31.804 18:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 82724 00:24:31.804 18:31:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 82724 00:24:32.379 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:32.379 [2024-07-22 18:31:25.928638] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:32.379 [2024-07-22 18:31:25.928858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82724 ] 00:24:32.379 [2024-07-22 18:31:26.117609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.379 [2024-07-22 18:31:26.422876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.380 [2024-07-22 18:31:26.629348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:32.380 Running I/O for 15 seconds... 
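The host/failover.sh steps traced above (roughly @22 through @59) are the failover scenario itself: the target exposes one malloc-backed namespace on three TCP listeners, bdevperf attaches through ports 4420 and 4421, and listeners are then removed and re-added under load so the host path has to fail over while the 15 s verify workload runs. Condensed from the rpc.py calls in the trace (a sketch; full script paths and the bdevperf launch itself are omitted):

# Condensed from the host/failover.sh trace above (rpc.py paths shortened).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done

# bdevperf (-q 128 -o 4096 -w verify -t 15) attaches through two ports, then I/O starts.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# Failover is forced by juggling listeners while the verify workload runs.
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 3
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421; sleep 3
rpc.py nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 1
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait    # the "0" printed above is the perform_tests exit status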
00:24:32.380 [2024-07-22 18:31:28.829578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.380 [2024-07-22 18:31:28.829647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.829679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.380 [2024-07-22 18:31:28.829699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.829722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.380 [2024-07-22 18:31:28.829742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.829764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.380 [2024-07-22 18:31:28.829782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.829803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:24:32.380 [2024-07-22 18:31:28.830988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.380 [2024-07-22 18:31:28.831272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.380 [2024-07-22 18:31:28.831318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.380 [2024-07-22 18:31:28.831391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.380 [2024-07-22 18:31:28.831437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.380 [2024-07-22 18:31:28.831480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.380 [2024-07-22 18:31:28.831531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.380 [2024-07-22 18:31:28.831575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.380 [2024-07-22 18:31:28.831622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.831964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.831990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.832013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.832035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.832057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.832080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.832102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.832124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-22 18:31:28.832146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-22 18:31:28.832167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:62504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.832224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.832271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.832315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.832362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.381 [2024-07-22 18:31:28.832407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.381 [2024-07-22 18:31:28.832453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.381 [2024-07-22 18:31:28.832497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.381 [2024-07-22 18:31:28.832550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.381 [2024-07-22 18:31:28.832596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.381 [2024-07-22 18:31:28.832640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.381 [2024-07-22 18:31:28.832684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.381 [2024-07-22 18:31:28.832730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 
18:31:28.832752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.832774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.832818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.832863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.832907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.832951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.832973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.833015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.833039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.833062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.833084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.833109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.833140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.833163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.833186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.833227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.833253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.833275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.833298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.833319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.833341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.833363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.833385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.833407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.833429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.833450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.833472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-22 18:31:28.833497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-22 18:31:28.833519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.381 [2024-07-22 18:31:28.833541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.833563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.382 [2024-07-22 18:31:28.833587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.833610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.382 [2024-07-22 18:31:28.833632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.833654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.382 [2024-07-22 18:31:28.833676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.833699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.382 [2024-07-22 18:31:28.833729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.833753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.382 [2024-07-22 18:31:28.833775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.833797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.382 [2024-07-22 18:31:28.833819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.833841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.382 [2024-07-22 18:31:28.833882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.833906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.833929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.833960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.833983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62712 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:32.382 [2024-07-22 18:31:28.834694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-22 18:31:28.834927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-22 18:31:28.834956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-22 18:31:28.834979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-22 18:31:28.835027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835161] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835644] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.835782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-22 18:31:28.835826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-22 18:31:28.835871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-22 18:31:28.835915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-22 18:31:28.835963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.835986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-22 18:31:28.836008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.836031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-22 18:31:28.836070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.836093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-22 18:31:28.836116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.836147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-22 18:31:28.836173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.836196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.836232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.836263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.836286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.836309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.383 [2024-07-22 18:31:28.836331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-22 18:31:28.836353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 
[2024-07-22 18:31:28.836625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.836973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.836999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.837019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.837041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:28.837060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.837081] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:24:32.384 [2024-07-22 18:31:28.837110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.384 [2024-07-22 18:31:28.837127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.384 [2024-07-22 18:31:28.837144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63392 len:8 PRP1 0x0 PRP2 0x0 00:24:32.384 [2024-07-22 18:31:28.837164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:28.837477] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 00:24:32.384 [2024-07-22 18:31:28.837508] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:32.384 [2024-07-22 18:31:28.837530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.384 [2024-07-22 18:31:28.842392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.384 [2024-07-22 18:31:28.842487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:24:32.384 [2024-07-22 18:31:28.888491] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:32.384 [2024-07-22 18:31:32.474124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:32.474255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:32.474315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:32.474345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:32.474370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:32.474391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:32.474413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:32.474432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:32.474454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:32.474473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:32.474496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 
18:31:32.474515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:32.474536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:32.474555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:32.474576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.384 [2024-07-22 18:31:32.474595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-22 18:31:32.474617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-22 18:31:32.474636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.474657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.474676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.474698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.474716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.474738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.474757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.474778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.474797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.474818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.474853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.474876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.474896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.474917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.474936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.474957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.474975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.474998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-22 18:31:32.475611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.385 [2024-07-22 18:31:32.475651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.385 [2024-07-22 18:31:32.475691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-22 18:31:32.475732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.387 [2024-07-22 18:31:32.475751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.475773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.387 [2024-07-22 18:31:32.475799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.475820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.387 [2024-07-22 18:31:32.475840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.475861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.387 [2024-07-22 18:31:32.475880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.475902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.387 [2024-07-22 18:31:32.475929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.475952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.387 [2024-07-22 18:31:32.475971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.475995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 
[2024-07-22 18:31:32.476256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-22 18:31:32.476644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476666] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.387 [2024-07-22 18:31:32.476685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-22 18:31:32.476706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.476725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.476747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.476766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.476788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.476807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.476828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.476847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.476868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.476887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.476908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.476926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.476947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.476973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.476995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477075] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11264 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-22 18:31:32.477521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-22 18:31:32.477562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-22 18:31:32.477603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-22 18:31:32.477643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-22 18:31:32.477683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-22 18:31:32.477723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-22 18:31:32.477763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-22 18:31:32.477804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.388 [2024-07-22 18:31:32.477906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-22 18:31:32.477927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:32.388 [2024-07-22 18:31:32.477947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.477968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.477988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478386] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.389 [2024-07-22 18:31:32.478613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.478655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.478694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.478735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.478775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.478816] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.478863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.478903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.478944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.478965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.478984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.479005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.479024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.479045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.479065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.479086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.479105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.479134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.479154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.479177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-22 18:31:32.479195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-22 18:31:32.479232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-22 18:31:32.479252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.479273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(5) to be set 00:24:32.390 [2024-07-22 18:31:32.479301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.390 [2024-07-22 18:31:32.479317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.390 [2024-07-22 18:31:32.479333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11448 len:8 PRP1 0x0 PRP2 0x0 00:24:32.390 [2024-07-22 18:31:32.479352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.479372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.390 [2024-07-22 18:31:32.479387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.390 [2024-07-22 18:31:32.479402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11888 len:8 PRP1 0x0 PRP2 0x0 00:24:32.390 [2024-07-22 18:31:32.479420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.479439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.390 [2024-07-22 18:31:32.479454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.390 [2024-07-22 18:31:32.479468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11896 len:8 PRP1 0x0 PRP2 0x0 00:24:32.390 [2024-07-22 18:31:32.479487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.479505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.390 [2024-07-22 18:31:32.479519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.390 [2024-07-22 18:31:32.479534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:8 PRP1 0x0 PRP2 0x0 00:24:32.390 [2024-07-22 18:31:32.479551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.479569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.390 [2024-07-22 18:31:32.479583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.390 [2024-07-22 18:31:32.479598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11912 len:8 PRP1 0x0 PRP2 0x0 00:24:32.390 [2024-07-22 18:31:32.479615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.479632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.390 [2024-07-22 18:31:32.479646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.390 [2024-07-22 18:31:32.479661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11920 len:8 PRP1 0x0 PRP2 0x0 
00:24:32.390 [2024-07-22 18:31:32.479687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.479705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.390 [2024-07-22 18:31:32.479720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.390 [2024-07-22 18:31:32.479735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11928 len:8 PRP1 0x0 PRP2 0x0 00:24:32.390 [2024-07-22 18:31:32.479753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.479771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.390 [2024-07-22 18:31:32.479785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.390 [2024-07-22 18:31:32.479799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:8 PRP1 0x0 PRP2 0x0 00:24:32.390 [2024-07-22 18:31:32.479817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.479834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.390 [2024-07-22 18:31:32.479849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.390 [2024-07-22 18:31:32.479863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11944 len:8 PRP1 0x0 PRP2 0x0 00:24:32.390 [2024-07-22 18:31:32.479881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.479899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.390 [2024-07-22 18:31:32.479913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.390 [2024-07-22 18:31:32.479928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11952 len:8 PRP1 0x0 PRP2 0x0 00:24:32.390 [2024-07-22 18:31:32.479945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.479963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.390 [2024-07-22 18:31:32.479978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.390 [2024-07-22 18:31:32.479992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11960 len:8 PRP1 0x0 PRP2 0x0 00:24:32.390 [2024-07-22 18:31:32.480011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.480312] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002ba00 was disconnected and freed. reset controller. 
00:24:32.390 [2024-07-22 18:31:32.480343] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:32.390 [2024-07-22 18:31:32.480450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.390 [2024-07-22 18:31:32.480480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.480502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.390 [2024-07-22 18:31:32.480520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.480540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.390 [2024-07-22 18:31:32.480558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.480587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.390 [2024-07-22 18:31:32.480607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:32.480625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.390 [2024-07-22 18:31:32.480714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:24:32.390 [2024-07-22 18:31:32.485632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.390 [2024-07-22 18:31:32.533448] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
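The block above is one complete failover cycle: every I/O still queued on the TCP qpair is completed with ABORTED - SQ DELETION, bdev_nvme logs the path switch (Start failover from 10.0.0.2:4421 to 10.0.0.2:4422), the admin qpair's ASYNC EVENT REQUESTs are aborted, the controller is disconnected, and the cycle ends with "Resetting controller successful". The same pattern repeats below for the 10.0.0.2:4422 to 10.0.0.2:4420 transition. A minimal shell sketch for summarizing these cycles from a saved copy of this output (the same try.txt file the test later inspects with grep -c):

# Summarize the failover activity recorded in the saved bdevperf output.
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

# One line per path switch, e.g. "Start failover from 10.0.0.2:4421 to 10.0.0.2:4422".
grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$log"

# Number of completed resets; host/failover.sh expects this to be exactly 3.
grep -c 'Resetting controller successful' "$log"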
00:24:32.390 [2024-07-22 18:31:37.093816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.390 [2024-07-22 18:31:37.093922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:37.093967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.390 [2024-07-22 18:31:37.093990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-22 18:31:37.094014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.390 [2024-07-22 18:31:37.094034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094399] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.391 [2024-07-22 18:31:37.094577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.094637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.094680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.094720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.094760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.094801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094821] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.094840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.094881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.094930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.094973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.094994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.095013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.095034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.095052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.095074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.095093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.095114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.095133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.095154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.095173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.095194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.095226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.095264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15992 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.095285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-22 18:31:37.095306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-22 18:31:37.095325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.095367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.095408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.095448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.095501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.095542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.095582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.095622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.392 [2024-07-22 18:31:37.095663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:32.392 [2024-07-22 18:31:37.095703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.392 [2024-07-22 18:31:37.095743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.392 [2024-07-22 18:31:37.095783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.392 [2024-07-22 18:31:37.095823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.392 [2024-07-22 18:31:37.095863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.392 [2024-07-22 18:31:37.095902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.392 [2024-07-22 18:31:37.095943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.095964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.392 [2024-07-22 18:31:37.095984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096122] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-22 18:31:37.096551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-22 18:31:37.096573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-22 18:31:37.096593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.096614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-22 18:31:37.096633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.096654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-22 18:31:37.096673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.096694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.096713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.096740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.096759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.096780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.096799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.096820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.096839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.096859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.096878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.096899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.096918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.096939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.096957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.096979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.096998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.097040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.097087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.097127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.097167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.097217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.097272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.097311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.393 [2024-07-22 18:31:37.097366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-22 18:31:37.097407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-22 18:31:37.097452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-22 18:31:37.097493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-22 18:31:37.097534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-22 18:31:37.097554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.097573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.097594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.097613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.097642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.097662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.097683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.097701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.097722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.394 [2024-07-22 18:31:37.097741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.097762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.394 [2024-07-22 18:31:37.097781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.097802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.394 [2024-07-22 18:31:37.097821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 
[2024-07-22 18:31:37.097842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.394 [2024-07-22 18:31:37.097872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.097895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.394 [2024-07-22 18:31:37.097914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.097940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.394 [2024-07-22 18:31:37.097960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.097980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.394 [2024-07-22 18:31:37.098000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:32.394 [2024-07-22 18:31:37.098040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098700] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-22 18:31:37.098786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-22 18:31:37.098806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.098827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.395 [2024-07-22 18:31:37.098845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.098867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.395 [2024-07-22 18:31:37.098885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.098907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.395 [2024-07-22 18:31:37.098925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.098946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.395 [2024-07-22 18:31:37.098964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.098986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.395 [2024-07-22 18:31:37.099005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.099025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(5) to be set 00:24:32.395 [2024-07-22 18:31:37.099050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.395 [2024-07-22 18:31:37.099065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.395 [2024-07-22 18:31:37.099082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16440 len:8 PRP1 0x0 PRP2 0x0 00:24:32.395 [2024-07-22 18:31:37.099101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.099122] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.395 [2024-07-22 18:31:37.099136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.395 [2024-07-22 18:31:37.099151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:8 PRP1 0x0 PRP2 0x0 00:24:32.395 [2024-07-22 18:31:37.099169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.099186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.395 [2024-07-22 18:31:37.099200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.395 [2024-07-22 18:31:37.099249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16840 len:8 PRP1 0x0 PRP2 0x0 00:24:32.395 [2024-07-22 18:31:37.099275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.099295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.395 [2024-07-22 18:31:37.099318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.395 [2024-07-22 18:31:37.099334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16848 len:8 PRP1 0x0 PRP2 0x0 00:24:32.395 [2024-07-22 18:31:37.099351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.099369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.395 [2024-07-22 18:31:37.099382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.395 [2024-07-22 18:31:37.099397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16856 len:8 PRP1 0x0 PRP2 0x0 00:24:32.395 [2024-07-22 18:31:37.099421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.099438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.395 [2024-07-22 18:31:37.099452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.395 [2024-07-22 18:31:37.099466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:8 PRP1 0x0 PRP2 0x0 00:24:32.395 [2024-07-22 18:31:37.099484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.099501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.395 [2024-07-22 18:31:37.099515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.395 [2024-07-22 18:31:37.099529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16872 len:8 PRP1 0x0 PRP2 0x0 00:24:32.395 [2024-07-22 18:31:37.099547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.099564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:32.395 [2024-07-22 18:31:37.099578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.395 [2024-07-22 18:31:37.099593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16880 len:8 PRP1 0x0 PRP2 0x0 00:24:32.395 [2024-07-22 18:31:37.099610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.099627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:32.395 [2024-07-22 18:31:37.099641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:32.395 [2024-07-22 18:31:37.099655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16888 len:8 PRP1 0x0 PRP2 0x0 00:24:32.395 [2024-07-22 18:31:37.099673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.099953] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002c180 was disconnected and freed. reset controller. 00:24:32.395 [2024-07-22 18:31:37.099981] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:32.395 [2024-07-22 18:31:37.100075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.395 [2024-07-22 18:31:37.100104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.100137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.395 [2024-07-22 18:31:37.100156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.100176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.395 [2024-07-22 18:31:37.100194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.100231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.395 [2024-07-22 18:31:37.100251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.395 [2024-07-22 18:31:37.100270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.395 [2024-07-22 18:31:37.100354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:24:32.395 [2024-07-22 18:31:37.105170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.395 [2024-07-22 18:31:37.139509] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:32.395 00:24:32.395 Latency(us) 00:24:32.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.395 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:32.396 Verification LBA range: start 0x0 length 0x4000 00:24:32.396 NVMe0n1 : 15.01 6902.79 26.96 232.68 0.00 17899.75 789.41 27525.12 00:24:32.396 =================================================================================================================== 00:24:32.396 Total : 6902.79 26.96 232.68 0.00 17899.75 789.41 27525.12 00:24:32.396 Received shutdown signal, test time was about 15.000000 seconds 00:24:32.396 00:24:32.396 Latency(us) 00:24:32.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.396 =================================================================================================================== 00:24:32.396 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:32.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=82931 00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 82931 /var/tmp/bdevperf.sock 00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 82931 ']' 00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
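For readability, the failover steps traced above and continued below condense into the plain-shell sketch that follows. It is not part of the captured run: the socket path, ports, NQN and bdevperf flags are copied verbatim from the surrounding trace (@65-@92), while the use of try.txt as the captured bdevperf output and the backgrounding of the -z invocation are inferred from the cat/rm steps at @94/@115.

SPDK=/home/vagrant/spdk_repo/spdk
RPC=$SPDK/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1
TRY=$SPDK/test/nvmf/host/try.txt                  # inferred from the cat/rm at @94/@115

# The 15 s run above must have logged exactly three successful controller resets (@65-@67).
count=$(grep -c 'Resetting controller successful' "$TRY")
(( count == 3 ))

# Second stage (@72-@80): start bdevperf idle (-z) behind an RPC socket, expose two extra
# portals on the target, then attach the same subsystem over all three portals so bdev_nvme
# holds one active path plus two failover targets.
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &> "$TRY" &
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"

# Dropping the original 4420 path forces the failover to 4421 recorded in try.txt below (@84),
# and the one-second verify run then executes over the surviving path (@89).
"$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests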
00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.396 18:31:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:33.358 18:31:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:33.358 18:31:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:33.358 18:31:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:33.616 [2024-07-22 18:31:45.484813] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:33.616 18:31:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:33.874 [2024-07-22 18:31:45.761135] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:33.874 18:31:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.132 NVMe0n1 00:24:34.132 18:31:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.698 00:24:34.698 18:31:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:34.955 00:24:34.955 18:31:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:34.955 18:31:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:35.213 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:35.471 18:31:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:38.753 18:31:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:38.753 18:31:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:38.753 18:31:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=83010 00:24:38.753 18:31:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 83010 00:24:38.753 18:31:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:40.125 0 00:24:40.125 18:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:40.125 [2024-07-22 18:31:44.342406] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:40.125 [2024-07-22 18:31:44.342750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82931 ] 00:24:40.125 [2024-07-22 18:31:44.522591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.125 [2024-07-22 18:31:44.813282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.125 [2024-07-22 18:31:45.042525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:40.125 [2024-07-22 18:31:47.265630] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:40.125 [2024-07-22 18:31:47.265869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.125 [2024-07-22 18:31:47.265913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.125 [2024-07-22 18:31:47.265945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.125 [2024-07-22 18:31:47.265979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.125 [2024-07-22 18:31:47.266001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.125 [2024-07-22 18:31:47.266024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.125 [2024-07-22 18:31:47.266052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.125 [2024-07-22 18:31:47.266075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.125 [2024-07-22 18:31:47.266097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.125 [2024-07-22 18:31:47.266202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.125 [2024-07-22 18:31:47.266279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:24:40.125 [2024-07-22 18:31:47.271605] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:40.125 Running I/O for 1 seconds... 
00:24:40.125 00:24:40.125 Latency(us) 00:24:40.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.125 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:40.125 Verification LBA range: start 0x0 length 0x4000 00:24:40.125 NVMe0n1 : 1.02 5157.02 20.14 0.00 0.00 24714.63 3589.59 22997.18 00:24:40.125 =================================================================================================================== 00:24:40.125 Total : 5157.02 20.14 0.00 0.00 24714.63 3589.59 22997.18 00:24:40.125 18:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:40.125 18:31:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.125 18:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.383 18:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.383 18:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:40.641 18:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.898 18:31:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:44.182 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:44.182 18:31:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:44.182 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 82931 00:24:44.182 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 82931 ']' 00:24:44.182 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 82931 00:24:44.182 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:44.182 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.182 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82931 00:24:44.182 killing process with pid 82931 00:24:44.182 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:44.182 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:44.182 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82931' 00:24:44.182 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 82931 00:24:44.182 18:31:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 82931 00:24:45.557 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:45.557 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:45.557 18:31:57 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:45.557 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:45.557 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:45.557 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:45.557 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:45.557 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.557 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:45.557 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.557 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.557 rmmod nvme_tcp 00:24:45.557 rmmod nvme_fabrics 00:24:45.816 rmmod nvme_keyring 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 82666 ']' 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 82666 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 82666 ']' 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 82666 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82666 00:24:45.816 killing process with pid 82666 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82666' 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 82666 00:24:45.816 18:31:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 82666 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.191 
18:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:47.191 00:24:47.191 real 0m36.633s 00:24:47.191 user 2m19.904s 00:24:47.191 sys 0m5.748s 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.191 ************************************ 00:24:47.191 END TEST nvmf_failover 00:24:47.191 ************************************ 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.191 ************************************ 00:24:47.191 START TEST nvmf_host_discovery 00:24:47.191 ************************************ 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:47.191 * Looking for test storage... 00:24:47.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:47.191 18:31:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 
00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:47.191 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:47.449 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:47.450 Cannot find device "nvmf_tgt_br" 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:47.450 Cannot find device "nvmf_tgt_br2" 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:47.450 Cannot find device "nvmf_tgt_br" 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:47.450 Cannot find device "nvmf_tgt_br2" 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:47.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:47.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip 
link add nvmf_init_if type veth peer name nvmf_init_br 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:47.450 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:47.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:24:47.708 00:24:47.708 --- 10.0.0.2 ping statistics --- 00:24:47.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.708 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:47.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:47.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:24:47.708 00:24:47.708 --- 10.0.0.3 ping statistics --- 00:24:47.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.708 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:47.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:47.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:24:47.708 00:24:47.708 --- 10.0.0.1 ping statistics --- 00:24:47.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.708 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=83297 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 83297 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 83297 ']' 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:47.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
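The nvmf_veth_init sequence traced above (@141-@207) amounts to the topology below. This is only a condensed sketch, not part of the run; interface names, addresses, the iptables rules and the ping checks are exactly the ones in the trace, with the ordering compacted and the error-tolerant teardown of stale devices omitted.

# One network namespace for the target, three veth pairs, one bridge tying the peer ends together.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                   # root namespace reaches the first target address
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # and the namespace reaches the initiator

With this in place the initiator (10.0.0.1) and the namespaced target addresses (10.0.0.2, 10.0.0.3) share one L2 segment through nvmf_br, which is why the NVMe/TCP and discovery listeners later in the trace bind to 10.0.0.2.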
00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:47.708 18:31:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.708 [2024-07-22 18:31:59.648273] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:47.708 [2024-07-22 18:31:59.648491] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.966 [2024-07-22 18:31:59.828085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.224 [2024-07-22 18:32:00.097727] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.224 [2024-07-22 18:32:00.097796] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.224 [2024-07-22 18:32:00.097813] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.224 [2024-07-22 18:32:00.097829] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.224 [2024-07-22 18:32:00.097841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.224 [2024-07-22 18:32:00.097906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.482 [2024-07-22 18:32:00.308723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.740 [2024-07-22 18:32:00.721708] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.740 [2024-07-22 18:32:00.729837] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.740 null0 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.740 null1 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.740 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.997 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:48.997 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.997 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=83328 00:24:48.997 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:48.997 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 83328 /tmp/host.sock 00:24:48.998 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 83328 ']' 00:24:48.998 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:48.998 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:48.998 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:48.998 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.998 18:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:48.998 [2024-07-22 18:32:00.871282] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:24:48.998 [2024-07-22 18:32:00.871450] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83328 ] 00:24:49.256 [2024-07-22 18:32:01.049330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.514 [2024-07-22 18:32:01.338346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.771 [2024-07-22 18:32:01.544507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:49.771 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.771 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:24:49.771 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.771 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:49.771 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.771 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:49.771 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.771 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:49.771 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.771 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.030 18:32:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.030 18:32:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.030 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:50.030 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:50.030 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.030 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.288 18:32:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.288 [2024-07-22 18:32:02.166563] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.288 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.289 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:24:50.547 18:32:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:24:50.824 [2024-07-22 18:32:02.796843] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:50.824 [2024-07-22 18:32:02.796902] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:50.824 [2024-07-22 18:32:02.796959] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:50.824 
[2024-07-22 18:32:02.802931] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:51.082 [2024-07-22 18:32:02.869534] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:51.082 [2024-07-22 18:32:02.869602] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.648 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:51.649 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # local max=10 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.907 [2024-07-22 18:32:03.749251] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:51.907 [2024-07-22 18:32:03.750021] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:51.907 [2024-07-22 18:32:03.750099] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:51.907 [2024-07-22 18:32:03.756024] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:51.907 18:32:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.907 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.908 [2024-07-22 18:32:03.819607] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:51.908 [2024-07-22 18:32:03.819658] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:51.908 [2024-07-22 18:32:03.819673] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT 
$NVMF_SECOND_PORT" ]]' 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:51.908 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.164 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:52.164 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:52.164 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:52.164 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:52.164 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:52.165 18:32:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.165 [2024-07-22 18:32:04.006676] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:52.165 [2024-07-22 18:32:04.006747] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:52.165 [2024-07-22 18:32:04.012647] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:52.165 [2024-07-22 18:32:04.012703] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:52.165 [2024-07-22 18:32:04.012865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.165 [2024-07-22 18:32:04.012905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.165 [2024-07-22 18:32:04.012931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.165 [2024-07-22 18:32:04.012946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.165 [2024-07-22 18:32:04.012961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.165 [2024-07-22 18:32:04.012975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.165 [2024-07-22 18:32:04.012989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.165 [2024-07-22 18:32:04.013003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.165 [2024-07-22 18:32:04.013016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(5) to be set 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # return 0 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:52.165 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 
-- # (( max-- )) 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.423 18:32:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.798 [2024-07-22 18:32:05.447378] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:53.798 [2024-07-22 18:32:05.447431] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:53.798 [2024-07-22 18:32:05.447463] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:53.798 [2024-07-22 18:32:05.453453] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:53.798 [2024-07-22 18:32:05.523963] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:53.798 [2024-07-22 18:32:05.524038] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:53.798 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.798 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:53.798 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:53.798 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:53.798 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:53.798 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.798 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:53.798 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.798 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:24:53.798 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.798 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.798 request: 00:24:53.798 { 00:24:53.799 "name": "nvme", 00:24:53.799 "trtype": "tcp", 00:24:53.799 "traddr": "10.0.0.2", 00:24:53.799 "adrfam": "ipv4", 00:24:53.799 "trsvcid": "8009", 00:24:53.799 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:53.799 "wait_for_attach": true, 00:24:53.799 "method": "bdev_nvme_start_discovery", 00:24:53.799 "req_id": 1 00:24:53.799 } 00:24:53.799 Got JSON-RPC error response 00:24:53.799 response: 00:24:53.799 { 00:24:53.799 "code": -17, 00:24:53.799 "message": "File exists" 00:24:53.799 } 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.799 request: 00:24:53.799 { 00:24:53.799 "name": "nvme_second", 00:24:53.799 "trtype": "tcp", 00:24:53.799 "traddr": "10.0.0.2", 00:24:53.799 "adrfam": "ipv4", 00:24:53.799 "trsvcid": "8009", 00:24:53.799 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:53.799 "wait_for_attach": true, 00:24:53.799 "method": "bdev_nvme_start_discovery", 00:24:53.799 "req_id": 1 00:24:53.799 } 00:24:53.799 Got JSON-RPC error response 00:24:53.799 response: 00:24:53.799 { 00:24:53.799 "code": -17, 00:24:53.799 "message": "File exists" 00:24:53.799 } 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:53.799 18:32:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.799 18:32:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.174 [2024-07-22 18:32:06.788797] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.174 [2024-07-22 18:32:06.788882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bc80 with addr=10.0.0.2, port=8010 00:24:55.174 [2024-07-22 18:32:06.788956] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:55.174 [2024-07-22 18:32:06.788975] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:55.174 [2024-07-22 18:32:06.788990] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:56.107 [2024-07-22 18:32:07.788849] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.107 [2024-07-22 18:32:07.788935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002bf00 with addr=10.0.0.2, port=8010 00:24:56.107 [2024-07-22 18:32:07.789002] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:56.107 [2024-07-22 18:32:07.789020] nvme.c: 
830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:56.107 [2024-07-22 18:32:07.789035] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:57.041 [2024-07-22 18:32:08.788516] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:57.041 request: 00:24:57.041 { 00:24:57.041 "name": "nvme_second", 00:24:57.041 "trtype": "tcp", 00:24:57.041 "traddr": "10.0.0.2", 00:24:57.041 "adrfam": "ipv4", 00:24:57.041 "trsvcid": "8010", 00:24:57.041 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:57.041 "wait_for_attach": false, 00:24:57.041 "attach_timeout_ms": 3000, 00:24:57.041 "method": "bdev_nvme_start_discovery", 00:24:57.041 "req_id": 1 00:24:57.041 } 00:24:57.041 Got JSON-RPC error response 00:24:57.041 response: 00:24:57.041 { 00:24:57.041 "code": -110, 00:24:57.041 "message": "Connection timed out" 00:24:57.041 } 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 83328 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:57.041 rmmod nvme_tcp 00:24:57.041 rmmod nvme_fabrics 00:24:57.041 rmmod nvme_keyring 00:24:57.041 18:32:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 83297 ']' 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 83297 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 83297 ']' 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 83297 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83297 00:24:57.041 killing process with pid 83297 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83297' 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 83297 00:24:57.041 18:32:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 83297 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:58.417 00:24:58.417 real 0m11.079s 00:24:58.417 user 0m21.299s 00:24:58.417 sys 0m2.184s 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.417 ************************************ 00:24:58.417 END TEST nvmf_host_discovery 00:24:58.417 ************************************ 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.417 ************************************ 00:24:58.417 START TEST nvmf_host_multipath_status 00:24:58.417 ************************************ 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:58.417 * Looking for test storage... 00:24:58.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.417 18:32:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.417 18:32:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:58.417 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
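Before the network setup that follows, multipath_status.sh has already fixed the handful of constants that every later command in this log reuses. Pulled together from the shell trace above (values verbatim from the trace):

    MALLOC_BDEV_SIZE=64          # passed to bdev_malloc_create later in the log
    MALLOC_BLOCK_SIZE=512
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1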
00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:58.418 Cannot find device "nvmf_tgt_br" 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:58.418 Cannot find device "nvmf_tgt_br2" 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:58.418 Cannot find device "nvmf_tgt_br" 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:58.418 Cannot find device "nvmf_tgt_br2" 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:24:58.418 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:58.676 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:58.676 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:58.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:58.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:58.677 18:32:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:58.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:24:58.677 00:24:58.677 --- 10.0.0.2 ping statistics --- 00:24:58.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.677 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:58.677 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:58.985 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:58.985 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:24:58.985 00:24:58.985 --- 10.0.0.3 ping statistics --- 00:24:58.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.985 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:58.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:24:58.985 00:24:58.985 --- 10.0.0.1 ping statistics --- 00:24:58.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.985 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=83793 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 83793 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 83793 ']' 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
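The long run of ip and iptables commands above is nvmf_veth_init building the virtual test network: one initiator veth on the host (10.0.0.1/24) and two target veths (10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, with all three peer ends enslaved to a single bridge; the three pings then confirm reachability of 10.0.0.2 and 10.0.0.3 from the host and of 10.0.0.1 from inside the namespace before the target is started. A condensed restatement of the same commands, with the repetitive ip link set ... up calls omitted (the trace above is authoritative):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target path 1, 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target path 2, 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                             # ties the *_br peer ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT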
00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.985 18:32:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:58.985 [2024-07-22 18:32:10.841217] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:24:58.985 [2024-07-22 18:32:10.841387] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.243 [2024-07-22 18:32:11.018533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:59.501 [2024-07-22 18:32:11.303706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.501 [2024-07-22 18:32:11.303791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.501 [2024-07-22 18:32:11.303808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.501 [2024-07-22 18:32:11.303823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.501 [2024-07-22 18:32:11.303834] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:59.501 [2024-07-22 18:32:11.304062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.501 [2024-07-22 18:32:11.304507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.501 [2024-07-22 18:32:11.513701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:59.761 18:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:59.761 18:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:59.761 18:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:59.761 18:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:59.761 18:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:59.761 18:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.761 18:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=83793 00:24:59.761 18:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:00.328 [2024-07-22 18:32:12.038981] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.328 18:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:00.596 Malloc0 00:25:00.596 18:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:00.870 18:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:01.128 18:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.128 [2024-07-22 18:32:13.144139] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.408 18:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:01.408 [2024-07-22 18:32:13.376286] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:01.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:01.408 18:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=83849 00:25:01.408 18:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:01.408 18:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:01.408 18:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 83849 /var/tmp/bdevperf.sock 00:25:01.408 18:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 83849 ']' 00:25:01.408 18:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.408 18:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.408 18:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
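Stripped of the xtrace prefixes, the target-side configuration above is a short RPC sequence against the default /var/tmp/spdk.sock, followed by launching bdevperf (kept idle by -z, and run in the background in the actual script) against its own RPC socket. A condensed restatement of the calls as they appear in the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc_py nvmf_create_transport -t tcp -o -u 8192              # TCP transport for the target
    $rpc_py bdev_malloc_create 64 512 -b Malloc0                 # backing namespace
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
            -a -s SPDK00000000000001 -r -m 2                     # -r enables ANA reporting
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # initiator-side I/O generator; waits on /var/tmp/bdevperf.sock until paths are attached
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &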
00:25:01.408 18:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.408 18:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:02.782 18:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.782 18:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:02.782 18:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:02.782 18:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:03.040 Nvme0n1 00:25:03.040 18:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:03.297 Nvme0n1 00:25:03.297 18:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:03.297 18:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:05.870 18:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:05.870 18:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:05.870 18:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:05.870 18:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:07.245 18:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:07.245 18:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:07.245 18:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.245 18:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:07.245 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.245 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:07.245 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:07.245 18:32:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.503 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:07.503 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:07.503 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.504 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:07.761 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.761 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:07.761 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.761 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:08.027 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.027 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:08.027 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.027 18:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:08.293 18:32:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.293 18:32:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:08.293 18:32:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:08.293 18:32:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.551 18:32:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.551 18:32:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:08.551 18:32:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:08.809 18:32:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 
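The two bdev_nvme_attach_controller calls above register 10.0.0.2:4420 and 10.0.0.2:4421 as two paths of the same Nvme0 controller (the second call passes -x multipath), and everything that follows is one repeating loop: set_ANA_state flips the ANA state of the two listeners, the test sleeps briefly, and check_status compares what bdev_nvme_get_io_paths reports. A minimal reconstruction of set_ANA_state, inferred from the pair of RPCs it issues here; the canonical helper lives in test/nvmf/host/multipath_status.sh:

    # set_ANA_state <state for port 4420> <state for port 4421>
    set_ANA_state() {
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    set_ANA_state non_optimized optimized    # the combination exercised just above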
00:25:09.375 18:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:10.310 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:10.310 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:10.310 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.310 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.568 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.568 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:10.568 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.568 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.826 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.826 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:10.826 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.826 18:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.085 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.085 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.085 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.085 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:11.651 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.651 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:11.651 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.651 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.909 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.909 18:32:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:11.909 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.909 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:12.167 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.167 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:12.167 18:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:12.424 18:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:12.690 18:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:13.625 18:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:13.625 18:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:13.625 18:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.625 18:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.882 18:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.882 18:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:13.882 18:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.882 18:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:14.138 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:14.138 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:14.138 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.138 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:14.396 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.396 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:14.396 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:14.396 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.654 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.654 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:14.654 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.654 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:14.917 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.917 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:14.917 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.917 18:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.184 18:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.184 18:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:15.184 18:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:15.442 18:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:15.700 18:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:16.635 18:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:16.635 18:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:16.635 18:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.635 18:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:16.893 18:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.893 18:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:25:16.893 18:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.893 18:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:17.151 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:17.151 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:17.151 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:17.151 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.412 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.412 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:17.412 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.412 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:17.683 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.683 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:17.683 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:17.683 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.941 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.941 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:17.941 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.941 18:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:18.199 18:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.199 18:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:18.199 18:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:18.457 18:32:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:18.714 18:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:19.648 18:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:19.648 18:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:19.648 18:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.648 18:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:19.906 18:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.906 18:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:19.906 18:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:19.906 18:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.163 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:20.163 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:20.163 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.163 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:20.421 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.421 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:20.421 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.421 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:20.679 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.679 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:20.679 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.679 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:25:20.938 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:20.938 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:20.938 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.938 18:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:21.196 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.196 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:21.196 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:21.454 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:21.712 18:32:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:22.646 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:22.646 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:22.646 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.646 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:22.905 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:22.905 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:22.905 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.905 18:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:23.163 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.163 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:23.163 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.163 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
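Each check_status line in this log expands into six port_status probes, in the order they appear above: current for 4420 and 4421, then connected for both, then accessible for both. port_status itself is just bdev_nvme_get_io_paths on the bdevperf RPC socket filtered through jq. A sketch reconstructed from those repeated rpc.py/jq pairs (the authoritative definition is in test/nvmf/host/multipath_status.sh):

    # port_status <trsvcid> <attribute> <expected>
    port_status() {
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local value
        value=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
                jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ $value == "$3" ]]
    }

    # e.g. the inaccessible/inaccessible cycle above reduces to:
    port_status 4420 current false && port_status 4421 current false &&
    port_status 4420 connected true && port_status 4421 connected true &&
    port_status 4420 accessible false && port_status 4421 accessible false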
00:25:23.421 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.421 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:23.421 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.421 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:23.679 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.679 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:23.679 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.679 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:23.938 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:23.938 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:23.938 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.938 18:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.199 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.199 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:24.472 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:24.472 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:24.730 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:24.988 18:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:25.923 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:25.923 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:25.923 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
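Up to this point only one path at a time has reported current=true, even when both listeners were optimized. The bdev_nvme_set_multipath_policy call above switches Nvme0n1 to active_active, so after re-setting both listeners to optimized, check_status true true true true true true now expects current=true on both 4420 and 4421, which is what the remaining jq probes report. The call as issued in the trace:

    # switch the multipath policy so both optimized paths carry I/O simultaneously
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active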
00:25:25.923 18:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.181 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.181 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:26.181 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.181 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:26.439 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.439 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:26.439 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.439 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:26.697 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.697 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:26.697 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.697 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:26.955 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.955 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:26.955 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.955 18:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.213 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.213 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:27.213 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.213 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:27.471 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.471 
18:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:27.471 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:27.730 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:27.988 18:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:29.362 18:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:29.362 18:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:29.362 18:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.362 18:32:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.362 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.362 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:29.362 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.362 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.620 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.620 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.620 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.620 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.877 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.878 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.878 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.878 18:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.135 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.135 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:30.135 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.136 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.394 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.394 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:30.394 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.394 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.653 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.653 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:30.653 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:30.911 18:32:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:31.169 18:32:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:32.104 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:32.104 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:32.104 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.104 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.362 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.362 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:32.362 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.362 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.620 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.620 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:25:32.620 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.620 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.878 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.878 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.878 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.878 18:32:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:33.136 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.136 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:33.136 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.136 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:33.394 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.394 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:33.394 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.395 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.653 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.653 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:33.653 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:33.911 18:32:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:34.169 18:32:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:35.140 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:35.140 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:35.140 18:32:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.140 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:35.706 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.706 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:35.706 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.706 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.965 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.965 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.965 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.965 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:36.223 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.223 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:36.223 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.223 18:32:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:36.481 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.481 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:36.481 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.481 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:36.740 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.740 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:36.740 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.740 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:25:36.998 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.998 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 83849 00:25:36.998 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 83849 ']' 00:25:36.999 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 83849 00:25:36.999 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:36.999 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:36.999 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83849 00:25:36.999 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:36.999 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:36.999 killing process with pid 83849 00:25:36.999 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83849' 00:25:36.999 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 83849 00:25:36.999 18:32:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 83849 00:25:37.972 Connection closed with partial response: 00:25:37.972 00:25:37.972 00:25:38.235 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 83849 00:25:38.235 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:38.235 [2024-07-22 18:32:13.494276] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:38.235 [2024-07-22 18:32:13.494496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83849 ] 00:25:38.235 [2024-07-22 18:32:13.665987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.235 [2024-07-22 18:32:13.973995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.235 [2024-07-22 18:32:14.199275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:38.235 Running I/O for 90 seconds... 
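After the final check the harness kills bdevperf (pid 83849); bdevperf reports "Connection closed with partial response" and the test dumps try.txt, i.e. bdevperf's own log: startup, the uring socket override, 90 seconds of I/O, and then a long run of NOTICE lines in which outstanding READ/WRITE commands complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), meaning the I/O hit a path whose ANA state was inaccessible at that moment. When reading such a dump it can help to pull the whole path table in one query instead of one field at a time; a convenience sketch, not part of the test itself:

    # Convenience sketch: show trsvcid plus all three flags for every I/O path at once.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq '[.poll_groups[].io_paths[]
               | {trsvcid: .transport.trsvcid, current, connected, accessible}]'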
00:25:38.235 [2024-07-22 18:32:30.322169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.235 [2024-07-22 18:32:30.322342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:38.235 [2024-07-22 18:32:30.322417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.235 [2024-07-22 18:32:30.322450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:38.235 [2024-07-22 18:32:30.322492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.235 [2024-07-22 18:32:30.322520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:38.235 [2024-07-22 18:32:30.322559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.235 [2024-07-22 18:32:30.322586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:38.235 [2024-07-22 18:32:30.322625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.235 [2024-07-22 18:32:30.322652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:38.235 [2024-07-22 18:32:30.322691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.235 [2024-07-22 18:32:30.322717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:38.235 [2024-07-22 18:32:30.322755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.235 [2024-07-22 18:32:30.322781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:38.235 [2024-07-22 18:32:30.322820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.235 [2024-07-22 18:32:30.322846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.235 [2024-07-22 18:32:30.322884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.235 [2024-07-22 18:32:30.322910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.235 [2024-07-22 18:32:30.322949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.235 [2024-07-22 18:32:30.322976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.235 [2024-07-22 18:32:30.323014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.235 [2024-07-22 18:32:30.323066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.323135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.323201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.323288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.323353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.323417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.323481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.323572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.323639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.323703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.323768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.323832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.323909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.323950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.323977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.324042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.324107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.324171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.324253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.324353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.324443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.324538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.324606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.324682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.324773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.324855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.324925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.324964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.324991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.325056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.325121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.325198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.325288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.325385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.325476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.325565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.325662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.325740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.325834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.236 [2024-07-22 18:32:30.325953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.325997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.236 [2024-07-22 18:32:30.326026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:38.236 [2024-07-22 18:32:30.326065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.326091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.326129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.326176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.326252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.326301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.326354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.326402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.326444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.326471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.326509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.326535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.326573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.326599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.326637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.326665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.326709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.326735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.326774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.326801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:25:38.237 [2024-07-22 18:32:30.326853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.326882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.326920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.326947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.326987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.327014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.327080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.327146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.327252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.327329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.327395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.327461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.327550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.327617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.327682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.237 [2024-07-22 18:32:30.327758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.327868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.327944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.327981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:38.237 [2024-07-22 18:32:30.328957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.237 [2024-07-22 18:32:30.328999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:38.238 [2024-07-22 18:32:30.329081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.329177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.329287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.329365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.329453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.329520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.329585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.329650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.329730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.329815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.329905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.329946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.329974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.330039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.330105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.330170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.330253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.330320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.330386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.330451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.330516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.330595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.330661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.330725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.330801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.330867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.330932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.330970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.330998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.331046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.331082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.331121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.331147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.331185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.331226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
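The NOTICE pairs above are emitted by SPDK's nvme_qpair.c: nvme_io_qpair_print_command logs each queued READ/WRITE (sqid, cid, nsid, lba, len, SGL type) and spdk_nvme_print_completion logs the matching completion. The "(03/02)" in ASYMMETRIC ACCESS INACCESSIBLE (03/02) reads as status code type 0x3 (path-related) / status code 0x02 (ANA Inaccessible), the status a controller returns while the namespace's ANA group is inaccessible, as an ANA/failover exercise like this test would provoke; dnr:0 marks the commands as retryable. The C sketch below shows one way such a completion entry could be decoded; the struct and helper names are illustrative for this note only and are not SPDK's actual API.

/* ana_status_sketch.c - illustrative decode of a 16-byte NVMe completion
 * queue entry. Field layout follows the NVMe spec; all names are made up
 * for this sketch and not taken from SPDK headers. */
#include <stdint.h>
#include <stdio.h>

struct cqe_status {            /* 16-bit status field, phase bit first */
    uint16_t p   : 1;          /* phase tag */
    uint16_t sc  : 8;          /* status code */
    uint16_t sct : 3;          /* status code type */
    uint16_t crd : 2;          /* command retry delay */
    uint16_t m   : 1;          /* more */
    uint16_t dnr : 1;          /* do not retry */
};

struct cqe {                   /* completion queue entry */
    uint32_t cdw0;
    uint32_t rsvd;
    uint16_t sqhd;             /* submission queue head pointer */
    uint16_t sqid;             /* submission queue id */
    uint16_t cid;              /* command identifier */
    struct cqe_status status;
};

/* True when the completion carries the path-related ANA Inaccessible
 * status that the log above prints as "(03/02)". */
static int is_ana_inaccessible(const struct cqe *c)
{
    return c->status.sct == 0x3 && c->status.sc == 0x02;
}

int main(void)
{
    /* Values taken from one completion in the log: qid:1 cid:81 sqhd:006d. */
    struct cqe c = { .sqhd = 0x006d, .sqid = 1, .cid = 81,
                     .status = { .sct = 0x3, .sc = 0x02, .dnr = 0 } };
    if (is_ana_inaccessible(&c) && !c.status.dnr)
        printf("qid:%u cid:%u -> ANA Inaccessible, retry permitted\n",
               c.sqid, c.cid);
    return 0;
}

Because dnr is clear on every completion in this burst, the initiator is free to resubmit the failed I/O, for example on another path once ANA reports it optimized again.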
00:25:38.238 [2024-07-22 18:32:30.331268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.331295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.333492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.238 [2024-07-22 18:32:30.333541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.333597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.333650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.333695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.333723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.333764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.333791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.333830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.333870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.333919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.333947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.333988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.238 [2024-07-22 18:32:30.334015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:38.238 [2024-07-22 18:32:30.334055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334912] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.334949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.334976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.335042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.335120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.335185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.335269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.335359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.335435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.335515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.335582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 
18:32:30.335646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.335710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.335774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.335839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.335902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.335941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.335968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.336031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.336107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.336177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.336259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51160 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.336337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.336402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.239 [2024-07-22 18:32:30.336467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.336532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.336616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.336683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.336760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.239 [2024-07-22 18:32:30.336824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:38.239 [2024-07-22 18:32:30.336863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.336889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.336928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.336954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.336999] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.337028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.337093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.337169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.337258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.337324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.337389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.337464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.337528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.337594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.240 [2024-07-22 18:32:30.337658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 
18:32:30.337696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.240 [2024-07-22 18:32:30.337722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.240 [2024-07-22 18:32:30.337786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.240 [2024-07-22 18:32:30.337850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.240 [2024-07-22 18:32:30.337939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.337977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.240 [2024-07-22 18:32:30.338015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.240 [2024-07-22 18:32:30.338082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.240 [2024-07-22 18:32:30.338156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 
cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.338948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.240 [2024-07-22 18:32:30.338974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:38.240 [2024-07-22 18:32:30.339012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:51920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.339039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.339115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.339179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.339269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.339334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.339409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.339473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.339550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.339614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.339678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 
18:32:30.339755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.339823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.339887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.339952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.339989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51376 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.340897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.340935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.340962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.341000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.341044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.341084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.341111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.341150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.341178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.341234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.341264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.341303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.341330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.341368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.341415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.341455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.241 [2024-07-22 18:32:30.341482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.341520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.341553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:38.241 [2024-07-22 18:32:30.341592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.241 [2024-07-22 18:32:30.341618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.341662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.341692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.341734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.341764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:25:38.242 [2024-07-22 18:32:30.341805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.341834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.341902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.341942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.341982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.342009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.342047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.342073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.342111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.342137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.342176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.342202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.342258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.342285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.342335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.342364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.342414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.342440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.342479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.342516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:30.343252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:30.343297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.119000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.119101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.119190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.119249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.119299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.119333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.119369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.242 [2024-07-22 18:32:46.119399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.119835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.242 [2024-07-22 18:32:46.119869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.119905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.119929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.119961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.119983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.120036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.120122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.120184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.120256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.242 [2024-07-22 18:32:46.120325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.242 [2024-07-22 18:32:46.120379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.242 [2024-07-22 18:32:46.120432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.242 [2024-07-22 18:32:46.120484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.242 [2024-07-22 18:32:46.120536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.242 [2024-07-22 18:32:46.120597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.120652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:38.242 [2024-07-22 18:32:46.120714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.120785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.242 [2024-07-22 18:32:46.120879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.120945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.242 [2024-07-22 18:32:46.120981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.121017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.121040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.121071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.121093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.121132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.121157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:38.242 [2024-07-22 18:32:46.121189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.242 [2024-07-22 18:32:46.121226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.121304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.121359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.121416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.121476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.121530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.121585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.121664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.121736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.121790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.121843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.121921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.121954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.121976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.122030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.122082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.122134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.122186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.122260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.122342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.122418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.122502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.122555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.122608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:25:38.243 [2024-07-22 18:32:46.122639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.122661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.122713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.122766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.122819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.122872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.122926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.122957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.122978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.123009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.123030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.123061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.123083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.123123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.123146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.123178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.123200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.123250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.123273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.123305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.123329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.123375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.123409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.123455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.243 [2024-07-22 18:32:46.123487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.123532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.243 [2024-07-22 18:32:46.123559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:38.243 [2024-07-22 18:32:46.123592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.244 [2024-07-22 18:32:46.123615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:38.244 [2024-07-22 18:32:46.123646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.244 [2024-07-22 18:32:46.123667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:38.244 [2024-07-22 18:32:46.123699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.244 [2024-07-22 18:32:46.123720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:38.244 [2024-07-22 18:32:46.125775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.244 [2024-07-22 18:32:46.125826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:38.244 [2024-07-22 18:32:46.125899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.244 [2024-07-22 18:32:46.125937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:38.244 [2024-07-22 18:32:46.126002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.244 [2024-07-22 18:32:46.126033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:38.244 [2024-07-22 18:32:46.126067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.244 [2024-07-22 18:32:46.126090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:38.244 [2024-07-22 18:32:46.126122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.244 [2024-07-22 18:32:46.126144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:38.244 [2024-07-22 18:32:46.126177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.244 [2024-07-22 18:32:46.126198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:38.244 [2024-07-22 18:32:46.126246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.244 [2024-07-22 18:32:46.126269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:38.244 [2024-07-22 18:32:46.126302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.244 [2024-07-22 18:32:46.126323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:38.244 [2024-07-22 18:32:46.126356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.244 [2024-07-22 18:32:46.126378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:38.244 Received shutdown signal, test time was about 33.510363 seconds 00:25:38.244 00:25:38.244 Latency(us) 00:25:38.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.244 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:38.244 Verification LBA range: start 0x0 length 0x4000 00:25:38.244 Nvme0n1 : 33.51 6700.85 26.18 0.00 0.00 19071.39 171.29 4057035.87 00:25:38.244 
=================================================================================================================== 00:25:38.244 Total : 6700.85 26.18 0.00 0.00 19071.39 171.29 4057035.87 00:25:38.244 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.502 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:38.502 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:38.502 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:38.502 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:38.502 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:38.761 rmmod nvme_tcp 00:25:38.761 rmmod nvme_fabrics 00:25:38.761 rmmod nvme_keyring 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 83793 ']' 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 83793 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 83793 ']' 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 83793 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83793 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:38.761 killing process with pid 83793 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83793' 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 83793 00:25:38.761 18:32:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 83793 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:40.137 18:32:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:40.137 00:25:40.137 real 0m41.835s 00:25:40.137 user 2m12.890s 00:25:40.137 sys 0m11.425s 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:40.137 ************************************ 00:25:40.137 END TEST nvmf_host_multipath_status 00:25:40.137 ************************************ 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.137 ************************************ 00:25:40.137 START TEST nvmf_discovery_remove_ifc 00:25:40.137 ************************************ 00:25:40.137 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:40.396 * Looking for test storage... 
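Stripped of the xtrace prefixes, the multipath_status teardown traced above reduces to a short command sequence. A minimal sketch, assuming scripts/rpc.py is on PATH, the target uses its default RPC socket, and $nvmfpid holds the nvmf_tgt PID (83793 in this run):

  # Sketch of the nvmftestfini steps traced above; $nvmfpid is set earlier by the harness.
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # drop the test subsystem
  trap - SIGINT SIGTERM EXIT                                   # clear the cleanup trap
  sync                                                         # flush I/O before unloading host modules
  modprobe -v -r nvme-tcp                                      # unload kernel NVMe/TCP initiator modules
  modprobe -v -r nvme-fabrics
  if kill -0 "$nvmfpid" 2>/dev/null; then
      kill "$nvmfpid"                                          # stop the nvmf_tgt reactor process
  fi
  ip -4 addr flush nvmf_init_if                                # drop the initiator-side test address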
00:25:40.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
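The NVME_HOSTNQN/NVME_HOSTID pair generated a few lines up (via nvme gen-hostnqn) is what the NVME_HOST array hands to nvme-cli in the tests that actually connect a kernel initiator; that connect is not part of this trace. The snippet below is only an illustration of how these variables are typically consumed, assuming nvme-cli and the harness defaults (target 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1):

  NVME_HOSTNQN=$(nvme gen-hostnqn)                   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}                    # the hostid is the trailing uuid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # Illustrative only: connect a kernel initiator to an NVMe/TCP subsystem with that identity.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"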
00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:40.396 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:40.397 Cannot find device "nvmf_tgt_br" 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:40.397 Cannot find device "nvmf_tgt_br2" 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:40.397 Cannot find device "nvmf_tgt_br" 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:40.397 Cannot find device "nvmf_tgt_br2" 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:40.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:40.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:40.397 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:40.656 18:32:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:40.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:25:40.656 00:25:40.656 --- 10.0.0.2 ping statistics --- 00:25:40.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.656 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:40.656 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:40.656 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:25:40.656 00:25:40.656 --- 10.0.0.3 ping statistics --- 00:25:40.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.656 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:40.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:25:40.656 00:25:40.656 --- 10.0.0.1 ping statistics --- 00:25:40.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.656 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=84636 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 84636 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 84636 ']' 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.656 18:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:40.914 [2024-07-22 18:32:52.746249] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
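Condensed from the trace above, the virtual topology these pings verify (host side at 10.0.0.1, target namespace at 10.0.0.2/10.0.0.3) is built roughly as follows; a sketch of the nvmf_veth_init steps, using the harness's default names:

  ip netns add nvmf_tgt_ns_spdk                              # target runs in its own network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # target-side veth pairs
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # move the target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge ties the host-side veth ends together
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the data port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1          # target namespace -> host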
00:25:40.914 [2024-07-22 18:32:52.746458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.914 [2024-07-22 18:32:52.929387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.478 [2024-07-22 18:32:53.260831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.478 [2024-07-22 18:32:53.260923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.478 [2024-07-22 18:32:53.260943] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.478 [2024-07-22 18:32:53.260960] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.478 [2024-07-22 18:32:53.260975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.478 [2024-07-22 18:32:53.261037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.737 [2024-07-22 18:32:53.496938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.996 [2024-07-22 18:32:53.813358] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.996 [2024-07-22 18:32:53.821856] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:41.996 null0 00:25:41.996 [2024-07-22 18:32:53.854007] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=84668 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 84668 /tmp/host.sock 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 84668 ']' 00:25:41.996 18:32:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:41.996 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.996 18:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:41.996 [2024-07-22 18:32:54.008991] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:25:41.996 [2024-07-22 18:32:54.009257] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84668 ] 00:25:42.254 [2024-07-22 18:32:54.241546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.821 [2024-07-22 18:32:54.548570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.080 18:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:43.080 18:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:25:43.080 18:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:43.080 18:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:43.080 18:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.080 18:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.080 18:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.080 18:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:43.080 18:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.080 18:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:43.339 [2024-07-22 18:32:55.153017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:25:43.339 18:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.339 18:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:43.339 18:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.339 18:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
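Behind the xtrace above, the host side of this test is simply a second SPDK app driven over its own RPC socket (rpc_cmd is the harness wrapper around scripts/rpc.py). A sketch using only the calls visible in the trace, with the paths from this run and assuming rpc.py is on PATH:

  # Start a second SPDK app acting as the NVMe-oF host: RPC on /tmp/host.sock,
  # bdev_nvme debug logging enabled, initialization deferred by --wait-for-rpc.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  hostpid=$!
  # (The harness waits for /tmp/host.sock via waitforlisten before issuing RPCs.)

  rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1        # same option the trace passes before init
  rpc.py -s /tmp/host.sock framework_start_init              # complete startup of the --wait-for-rpc app
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach                                      # block until the discovered ctrlr is attached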
00:25:44.273 [2024-07-22 18:32:56.288960] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:44.273 [2024-07-22 18:32:56.289017] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:44.273 [2024-07-22 18:32:56.289060] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.532 [2024-07-22 18:32:56.295044] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:44.532 [2024-07-22 18:32:56.361372] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:44.532 [2024-07-22 18:32:56.361484] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:44.532 [2024-07-22 18:32:56.361555] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:44.532 [2024-07-22 18:32:56.361584] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.532 [2024-07-22 18:32:56.361627] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.532 [2024-07-22 18:32:56.367785] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b000 was disconnected and freed. delete nvme_qpair. 
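The repeated bdev_get_bdevs / sleep 1 round-trips that follow are the harness's wait_for_bdev polling. A condensed sketch of what the trace below does (helper names taken from discovery_remove_ifc.sh; the real helpers may add retry limits and bookkeeping not shown here):

  get_bdev_list() {
      # Current bdev names on the host app, normalized to one sorted, space-separated line.
      rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # Poll until the bdev list equals the expected value ("nvme0n1", or "" once the path is gone).
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme0n1                                                    # discovery attached the namespace
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if   # remove the target's listen address
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down              # and take the interface down
  wait_for_bdev ''                                                         # the bdev should disappear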
00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:44.532 18:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:45.570 18:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:45.570 18:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.570 18:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:45.570 18:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.570 18:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:45.570 18:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:45.570 18:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:45.570 18:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.570 18:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:45.570 18:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:46.941 18:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:46.941 18:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.941 18:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:46.941 18:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.941 18:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:46.941 18:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:46.941 18:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:46.941 18:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.941 18:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:46.941 18:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:47.872 18:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:47.872 18:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.872 18:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.872 18:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.872 18:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:47.872 18:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:47.872 18:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:47.872 18:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.872 18:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:47.872 18:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:48.831 18:33:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:48.831 18:33:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.831 18:33:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.831 18:33:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:48.831 18:33:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:48.831 18:33:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.831 18:33:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:48.831 18:33:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.831 18:33:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' 
]] 00:25:48.831 18:33:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:49.765 18:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.765 18:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.765 18:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.765 18:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.765 18:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.765 18:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.765 18:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.765 18:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.022 18:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:50.022 18:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:50.022 [2024-07-22 18:33:01.798974] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:50.022 [2024-07-22 18:33:01.799078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.022 [2024-07-22 18:33:01.799104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.022 [2024-07-22 18:33:01.799126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.022 [2024-07-22 18:33:01.799140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.022 [2024-07-22 18:33:01.799155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.022 [2024-07-22 18:33:01.799169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.022 [2024-07-22 18:33:01.799190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.022 [2024-07-22 18:33:01.799216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.022 [2024-07-22 18:33:01.799234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.022 [2024-07-22 18:33:01.799248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.022 [2024-07-22 18:33:01.799262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:25:50.022 [2024-07-22 18:33:01.808958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x61500002ad80 (9): Bad file descriptor 00:25:50.022 [2024-07-22 18:33:01.818989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:50.959 18:33:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:50.959 18:33:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.959 18:33:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.959 18:33:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:50.959 18:33:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.959 18:33:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:50.959 18:33:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:50.959 [2024-07-22 18:33:02.867384] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:25:50.959 [2024-07-22 18:33:02.867563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4420 00:25:50.959 [2024-07-22 18:33:02.867616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:25:50.959 [2024-07-22 18:33:02.867722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:25:50.960 [2024-07-22 18:33:02.869058] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:50.960 [2024-07-22 18:33:02.869162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:50.960 [2024-07-22 18:33:02.869198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:50.960 [2024-07-22 18:33:02.869270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:50.960 [2024-07-22 18:33:02.869369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.960 [2024-07-22 18:33:02.869418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:50.960 18:33:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.960 18:33:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:50.960 18:33:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:51.895 [2024-07-22 18:33:03.869520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
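The cycle repeated above is the test's wait loop: once per second it lists the bdevs over the host RPC socket at /tmp/host.sock, normalizes the names, and compares the result against the expected value until the namespace disappears (and, later, reappears). Below is a minimal sketch of that polling helper, reconstructed only from the commands visible in this xtrace; rpc_cmd is the SPDK test wrapper used throughout the trace, and anything not shown here (timeouts, error handling) is omitted.

    # Poll the host app's bdev list until it matches the expected value.
    # Grounded in the bdev_get_bdevs | jq | sort | xargs pipeline seen above.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1   # "" while waiting for nvme0n1 to vanish, "nvme1n1" after re-attach
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1         # produces the one-second cadence visible in the timestamps above
        done
    }

While this loop runs, the host's reconnect attempts against the downed interface keep failing with errno 110 (connection timed out), which accounts for the interleaving of poll cycles and reset errors in the surrounding trace.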
00:25:51.895 [2024-07-22 18:33:03.869614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:51.895 [2024-07-22 18:33:03.869650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:51.895 [2024-07-22 18:33:03.869667] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:51.895 [2024-07-22 18:33:03.869703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.895 [2024-07-22 18:33:03.869752] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:51.895 [2024-07-22 18:33:03.869817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.895 [2024-07-22 18:33:03.869849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.895 [2024-07-22 18:33:03.869882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.895 [2024-07-22 18:33:03.869903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.895 [2024-07-22 18:33:03.869918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.895 [2024-07-22 18:33:03.869932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.895 [2024-07-22 18:33:03.869946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.895 [2024-07-22 18:33:03.869960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.895 [2024-07-22 18:33:03.869974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:51.895 [2024-07-22 18:33:03.869987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.895 [2024-07-22 18:33:03.870001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:51.895 [2024-07-22 18:33:03.870067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:25:51.895 [2024-07-22 18:33:03.871051] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:51.895 [2024-07-22 18:33:03.871096] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:51.895 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:51.895 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.895 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.896 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.896 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:51.896 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:51.896 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.154 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.154 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:52.155 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:52.155 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:52.155 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:52.155 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.155 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.155 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.155 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.155 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.155 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.155 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.155 18:33:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.155 18:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:52.155 18:33:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:53.092 18:33:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:53.092 18:33:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.092 18:33:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.092 18:33:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:53.092 18:33:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.092 18:33:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:53.092 18:33:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:53.092 18:33:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.092 18:33:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:53.092 18:33:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:54.028 [2024-07-22 18:33:05.885513] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:54.028 [2024-07-22 18:33:05.885566] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:54.028 [2024-07-22 18:33:05.885621] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:54.028 [2024-07-22 18:33:05.891608] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:54.028 [2024-07-22 18:33:05.957941] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:54.028 [2024-07-22 18:33:05.958045] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:54.028 [2024-07-22 18:33:05.958116] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:54.028 [2024-07-22 18:33:05.958145] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:54.028 [2024-07-22 18:33:05.958163] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:54.028 [2024-07-22 18:33:05.964607] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b780 was disconnected and freed. delete nvme_qpair. 
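At this point the target-side interface has been restored and the discovery service has re-attached the subsystem under a new controller, so the namespace surfaces as nvme1n1. The recovery step is the mirror image of the teardown at the start of the test; collected from the commands echoed in the trace (discovery_remove_ifc.sh@82, @83 and @86), it amounts to:

    # Re-add the target address inside its network namespace, bring the veth back up,
    # then reuse the same polling helper to wait for the re-attached namespace.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1

Once the discovery poller sees the listener again it logs the "new subsystem nvme1" and "attach nvme1 done" messages above, and the wait loop exits on its next pass.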
00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 84668 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 84668 ']' 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 84668 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84668 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84668' 00:25:54.288 killing process with pid 84668 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 84668 00:25:54.288 18:33:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 84668 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:55.667 rmmod nvme_tcp 00:25:55.667 rmmod nvme_fabrics 00:25:55.667 rmmod nvme_keyring 00:25:55.667 18:33:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 84636 ']' 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 84636 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 84636 ']' 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 84636 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84636 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84636' 00:25:55.667 killing process with pid 84636 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 84636 00:25:55.667 18:33:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 84636 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:57.160 00:25:57.160 real 0m16.661s 00:25:57.160 user 0m28.062s 00:25:57.160 sys 0m2.845s 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.160 ************************************ 00:25:57.160 END TEST nvmf_discovery_remove_ifc 00:25:57.160 ************************************ 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- 
# run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.160 ************************************ 00:25:57.160 START TEST nvmf_identify_kernel_target 00:25:57.160 ************************************ 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:57.160 * Looking for test storage... 00:25:57.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.160 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:57.161 18:33:08 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:57.161 Cannot find device "nvmf_tgt_br" 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:57.161 Cannot find device "nvmf_tgt_br2" 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:57.161 Cannot find device "nvmf_tgt_br" 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:25:57.161 18:33:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:57.161 Cannot find device "nvmf_tgt_br2" 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:57.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:57.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:57.161 
18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:57.161 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:57.421 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:57.421 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:57.421 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:57.421 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:57.421 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:57.421 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:57.421 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:57.421 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:57.421 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:57.421 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:57.421 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:57.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:25:57.422 00:25:57.422 --- 10.0.0.2 ping statistics --- 00:25:57.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.422 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:57.422 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:57.422 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:25:57.422 00:25:57.422 --- 10.0.0.3 ping statistics --- 00:25:57.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.422 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:57.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:57.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:25:57.422 00:25:57.422 --- 10.0.0.1 ping statistics --- 00:25:57.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.422 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:57.422 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:57.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:57.940 Waiting for block devices as requested 00:25:57.941 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:57.941 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:57.941 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:57.941 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:57.941 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:57.941 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:57.941 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:57.941 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:57.941 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:57.941 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:57.941 18:33:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:58.200 No valid GPT data, bailing 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:25:58.200 18:33:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:58.200 No valid GPT data, bailing 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:58.200 No valid GPT data, bailing 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:58.200 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:58.459 No valid GPT data, bailing 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:58.459 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:58.460 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:58.460 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -a 10.0.0.1 -t tcp -s 4420 00:25:58.460 00:25:58.460 Discovery Log Number of Records 2, Generation counter 2 00:25:58.460 =====Discovery Log Entry 0====== 00:25:58.460 trtype: tcp 00:25:58.460 adrfam: ipv4 00:25:58.460 subtype: current discovery subsystem 00:25:58.460 treq: not specified, sq flow control disable supported 00:25:58.460 portid: 1 00:25:58.460 trsvcid: 4420 00:25:58.460 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:58.460 traddr: 10.0.0.1 00:25:58.460 eflags: none 00:25:58.460 sectype: none 00:25:58.460 =====Discovery Log Entry 1====== 00:25:58.460 trtype: tcp 00:25:58.460 adrfam: ipv4 00:25:58.460 subtype: nvme subsystem 00:25:58.460 treq: not 
specified, sq flow control disable supported 00:25:58.460 portid: 1 00:25:58.460 trsvcid: 4420 00:25:58.460 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:58.460 traddr: 10.0.0.1 00:25:58.460 eflags: none 00:25:58.460 sectype: none 00:25:58.460 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:58.460 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:58.719 ===================================================== 00:25:58.719 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:58.719 ===================================================== 00:25:58.719 Controller Capabilities/Features 00:25:58.719 ================================ 00:25:58.719 Vendor ID: 0000 00:25:58.719 Subsystem Vendor ID: 0000 00:25:58.719 Serial Number: 17d00e4307882e65bf48 00:25:58.719 Model Number: Linux 00:25:58.719 Firmware Version: 6.7.0-68 00:25:58.719 Recommended Arb Burst: 0 00:25:58.719 IEEE OUI Identifier: 00 00 00 00:25:58.719 Multi-path I/O 00:25:58.719 May have multiple subsystem ports: No 00:25:58.719 May have multiple controllers: No 00:25:58.719 Associated with SR-IOV VF: No 00:25:58.719 Max Data Transfer Size: Unlimited 00:25:58.719 Max Number of Namespaces: 0 00:25:58.719 Max Number of I/O Queues: 1024 00:25:58.719 NVMe Specification Version (VS): 1.3 00:25:58.719 NVMe Specification Version (Identify): 1.3 00:25:58.719 Maximum Queue Entries: 1024 00:25:58.719 Contiguous Queues Required: No 00:25:58.719 Arbitration Mechanisms Supported 00:25:58.719 Weighted Round Robin: Not Supported 00:25:58.719 Vendor Specific: Not Supported 00:25:58.719 Reset Timeout: 7500 ms 00:25:58.719 Doorbell Stride: 4 bytes 00:25:58.719 NVM Subsystem Reset: Not Supported 00:25:58.719 Command Sets Supported 00:25:58.719 NVM Command Set: Supported 00:25:58.719 Boot Partition: Not Supported 00:25:58.720 Memory Page Size Minimum: 4096 bytes 00:25:58.720 Memory Page Size Maximum: 4096 bytes 00:25:58.720 Persistent Memory Region: Not Supported 00:25:58.720 Optional Asynchronous Events Supported 00:25:58.720 Namespace Attribute Notices: Not Supported 00:25:58.720 Firmware Activation Notices: Not Supported 00:25:58.720 ANA Change Notices: Not Supported 00:25:58.720 PLE Aggregate Log Change Notices: Not Supported 00:25:58.720 LBA Status Info Alert Notices: Not Supported 00:25:58.720 EGE Aggregate Log Change Notices: Not Supported 00:25:58.720 Normal NVM Subsystem Shutdown event: Not Supported 00:25:58.720 Zone Descriptor Change Notices: Not Supported 00:25:58.720 Discovery Log Change Notices: Supported 00:25:58.720 Controller Attributes 00:25:58.720 128-bit Host Identifier: Not Supported 00:25:58.720 Non-Operational Permissive Mode: Not Supported 00:25:58.720 NVM Sets: Not Supported 00:25:58.720 Read Recovery Levels: Not Supported 00:25:58.720 Endurance Groups: Not Supported 00:25:58.720 Predictable Latency Mode: Not Supported 00:25:58.720 Traffic Based Keep ALive: Not Supported 00:25:58.720 Namespace Granularity: Not Supported 00:25:58.720 SQ Associations: Not Supported 00:25:58.720 UUID List: Not Supported 00:25:58.720 Multi-Domain Subsystem: Not Supported 00:25:58.720 Fixed Capacity Management: Not Supported 00:25:58.720 Variable Capacity Management: Not Supported 00:25:58.720 Delete Endurance Group: Not Supported 00:25:58.720 Delete NVM Set: Not Supported 00:25:58.720 Extended LBA Formats Supported: Not Supported 00:25:58.720 Flexible Data 
Placement Supported: Not Supported 00:25:58.720 00:25:58.720 Controller Memory Buffer Support 00:25:58.720 ================================ 00:25:58.720 Supported: No 00:25:58.720 00:25:58.720 Persistent Memory Region Support 00:25:58.720 ================================ 00:25:58.720 Supported: No 00:25:58.720 00:25:58.720 Admin Command Set Attributes 00:25:58.720 ============================ 00:25:58.720 Security Send/Receive: Not Supported 00:25:58.720 Format NVM: Not Supported 00:25:58.720 Firmware Activate/Download: Not Supported 00:25:58.720 Namespace Management: Not Supported 00:25:58.720 Device Self-Test: Not Supported 00:25:58.720 Directives: Not Supported 00:25:58.720 NVMe-MI: Not Supported 00:25:58.720 Virtualization Management: Not Supported 00:25:58.720 Doorbell Buffer Config: Not Supported 00:25:58.720 Get LBA Status Capability: Not Supported 00:25:58.720 Command & Feature Lockdown Capability: Not Supported 00:25:58.720 Abort Command Limit: 1 00:25:58.720 Async Event Request Limit: 1 00:25:58.720 Number of Firmware Slots: N/A 00:25:58.720 Firmware Slot 1 Read-Only: N/A 00:25:58.720 Firmware Activation Without Reset: N/A 00:25:58.720 Multiple Update Detection Support: N/A 00:25:58.720 Firmware Update Granularity: No Information Provided 00:25:58.720 Per-Namespace SMART Log: No 00:25:58.720 Asymmetric Namespace Access Log Page: Not Supported 00:25:58.720 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:58.720 Command Effects Log Page: Not Supported 00:25:58.720 Get Log Page Extended Data: Supported 00:25:58.720 Telemetry Log Pages: Not Supported 00:25:58.720 Persistent Event Log Pages: Not Supported 00:25:58.720 Supported Log Pages Log Page: May Support 00:25:58.720 Commands Supported & Effects Log Page: Not Supported 00:25:58.720 Feature Identifiers & Effects Log Page:May Support 00:25:58.720 NVMe-MI Commands & Effects Log Page: May Support 00:25:58.720 Data Area 4 for Telemetry Log: Not Supported 00:25:58.720 Error Log Page Entries Supported: 1 00:25:58.720 Keep Alive: Not Supported 00:25:58.720 00:25:58.720 NVM Command Set Attributes 00:25:58.720 ========================== 00:25:58.720 Submission Queue Entry Size 00:25:58.720 Max: 1 00:25:58.720 Min: 1 00:25:58.720 Completion Queue Entry Size 00:25:58.720 Max: 1 00:25:58.720 Min: 1 00:25:58.720 Number of Namespaces: 0 00:25:58.720 Compare Command: Not Supported 00:25:58.720 Write Uncorrectable Command: Not Supported 00:25:58.720 Dataset Management Command: Not Supported 00:25:58.720 Write Zeroes Command: Not Supported 00:25:58.720 Set Features Save Field: Not Supported 00:25:58.720 Reservations: Not Supported 00:25:58.720 Timestamp: Not Supported 00:25:58.720 Copy: Not Supported 00:25:58.720 Volatile Write Cache: Not Present 00:25:58.720 Atomic Write Unit (Normal): 1 00:25:58.720 Atomic Write Unit (PFail): 1 00:25:58.720 Atomic Compare & Write Unit: 1 00:25:58.720 Fused Compare & Write: Not Supported 00:25:58.720 Scatter-Gather List 00:25:58.720 SGL Command Set: Supported 00:25:58.720 SGL Keyed: Not Supported 00:25:58.720 SGL Bit Bucket Descriptor: Not Supported 00:25:58.720 SGL Metadata Pointer: Not Supported 00:25:58.720 Oversized SGL: Not Supported 00:25:58.720 SGL Metadata Address: Not Supported 00:25:58.720 SGL Offset: Supported 00:25:58.720 Transport SGL Data Block: Not Supported 00:25:58.720 Replay Protected Memory Block: Not Supported 00:25:58.720 00:25:58.720 Firmware Slot Information 00:25:58.720 ========================= 00:25:58.720 Active slot: 0 00:25:58.720 00:25:58.720 00:25:58.720 Error Log 
00:25:58.720 ========= 00:25:58.720 00:25:58.720 Active Namespaces 00:25:58.720 ================= 00:25:58.720 Discovery Log Page 00:25:58.720 ================== 00:25:58.720 Generation Counter: 2 00:25:58.720 Number of Records: 2 00:25:58.720 Record Format: 0 00:25:58.720 00:25:58.720 Discovery Log Entry 0 00:25:58.720 ---------------------- 00:25:58.720 Transport Type: 3 (TCP) 00:25:58.720 Address Family: 1 (IPv4) 00:25:58.720 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:58.720 Entry Flags: 00:25:58.720 Duplicate Returned Information: 0 00:25:58.720 Explicit Persistent Connection Support for Discovery: 0 00:25:58.720 Transport Requirements: 00:25:58.720 Secure Channel: Not Specified 00:25:58.720 Port ID: 1 (0x0001) 00:25:58.720 Controller ID: 65535 (0xffff) 00:25:58.720 Admin Max SQ Size: 32 00:25:58.720 Transport Service Identifier: 4420 00:25:58.720 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:58.720 Transport Address: 10.0.0.1 00:25:58.720 Discovery Log Entry 1 00:25:58.720 ---------------------- 00:25:58.720 Transport Type: 3 (TCP) 00:25:58.720 Address Family: 1 (IPv4) 00:25:58.720 Subsystem Type: 2 (NVM Subsystem) 00:25:58.720 Entry Flags: 00:25:58.720 Duplicate Returned Information: 0 00:25:58.720 Explicit Persistent Connection Support for Discovery: 0 00:25:58.720 Transport Requirements: 00:25:58.720 Secure Channel: Not Specified 00:25:58.720 Port ID: 1 (0x0001) 00:25:58.720 Controller ID: 65535 (0xffff) 00:25:58.720 Admin Max SQ Size: 32 00:25:58.720 Transport Service Identifier: 4420 00:25:58.720 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:58.720 Transport Address: 10.0.0.1 00:25:58.720 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:58.980 get_feature(0x01) failed 00:25:58.980 get_feature(0x02) failed 00:25:58.980 get_feature(0x04) failed 00:25:58.980 ===================================================== 00:25:58.980 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:58.980 ===================================================== 00:25:58.980 Controller Capabilities/Features 00:25:58.980 ================================ 00:25:58.980 Vendor ID: 0000 00:25:58.980 Subsystem Vendor ID: 0000 00:25:58.980 Serial Number: d2de936179bc4a4661ad 00:25:58.980 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:58.980 Firmware Version: 6.7.0-68 00:25:58.980 Recommended Arb Burst: 6 00:25:58.980 IEEE OUI Identifier: 00 00 00 00:25:58.980 Multi-path I/O 00:25:58.980 May have multiple subsystem ports: Yes 00:25:58.980 May have multiple controllers: Yes 00:25:58.980 Associated with SR-IOV VF: No 00:25:58.980 Max Data Transfer Size: Unlimited 00:25:58.980 Max Number of Namespaces: 1024 00:25:58.980 Max Number of I/O Queues: 128 00:25:58.980 NVMe Specification Version (VS): 1.3 00:25:58.980 NVMe Specification Version (Identify): 1.3 00:25:58.980 Maximum Queue Entries: 1024 00:25:58.980 Contiguous Queues Required: No 00:25:58.980 Arbitration Mechanisms Supported 00:25:58.980 Weighted Round Robin: Not Supported 00:25:58.980 Vendor Specific: Not Supported 00:25:58.980 Reset Timeout: 7500 ms 00:25:58.980 Doorbell Stride: 4 bytes 00:25:58.980 NVM Subsystem Reset: Not Supported 00:25:58.980 Command Sets Supported 00:25:58.980 NVM Command Set: Supported 00:25:58.980 Boot Partition: Not Supported 00:25:58.980 Memory 
Page Size Minimum: 4096 bytes 00:25:58.980 Memory Page Size Maximum: 4096 bytes 00:25:58.980 Persistent Memory Region: Not Supported 00:25:58.980 Optional Asynchronous Events Supported 00:25:58.980 Namespace Attribute Notices: Supported 00:25:58.980 Firmware Activation Notices: Not Supported 00:25:58.980 ANA Change Notices: Supported 00:25:58.980 PLE Aggregate Log Change Notices: Not Supported 00:25:58.980 LBA Status Info Alert Notices: Not Supported 00:25:58.980 EGE Aggregate Log Change Notices: Not Supported 00:25:58.980 Normal NVM Subsystem Shutdown event: Not Supported 00:25:58.980 Zone Descriptor Change Notices: Not Supported 00:25:58.980 Discovery Log Change Notices: Not Supported 00:25:58.980 Controller Attributes 00:25:58.980 128-bit Host Identifier: Supported 00:25:58.980 Non-Operational Permissive Mode: Not Supported 00:25:58.980 NVM Sets: Not Supported 00:25:58.980 Read Recovery Levels: Not Supported 00:25:58.980 Endurance Groups: Not Supported 00:25:58.980 Predictable Latency Mode: Not Supported 00:25:58.980 Traffic Based Keep ALive: Supported 00:25:58.980 Namespace Granularity: Not Supported 00:25:58.980 SQ Associations: Not Supported 00:25:58.980 UUID List: Not Supported 00:25:58.980 Multi-Domain Subsystem: Not Supported 00:25:58.980 Fixed Capacity Management: Not Supported 00:25:58.980 Variable Capacity Management: Not Supported 00:25:58.980 Delete Endurance Group: Not Supported 00:25:58.980 Delete NVM Set: Not Supported 00:25:58.980 Extended LBA Formats Supported: Not Supported 00:25:58.980 Flexible Data Placement Supported: Not Supported 00:25:58.980 00:25:58.980 Controller Memory Buffer Support 00:25:58.980 ================================ 00:25:58.980 Supported: No 00:25:58.980 00:25:58.980 Persistent Memory Region Support 00:25:58.980 ================================ 00:25:58.980 Supported: No 00:25:58.980 00:25:58.980 Admin Command Set Attributes 00:25:58.980 ============================ 00:25:58.980 Security Send/Receive: Not Supported 00:25:58.980 Format NVM: Not Supported 00:25:58.980 Firmware Activate/Download: Not Supported 00:25:58.980 Namespace Management: Not Supported 00:25:58.980 Device Self-Test: Not Supported 00:25:58.980 Directives: Not Supported 00:25:58.980 NVMe-MI: Not Supported 00:25:58.980 Virtualization Management: Not Supported 00:25:58.980 Doorbell Buffer Config: Not Supported 00:25:58.980 Get LBA Status Capability: Not Supported 00:25:58.980 Command & Feature Lockdown Capability: Not Supported 00:25:58.980 Abort Command Limit: 4 00:25:58.980 Async Event Request Limit: 4 00:25:58.980 Number of Firmware Slots: N/A 00:25:58.980 Firmware Slot 1 Read-Only: N/A 00:25:58.980 Firmware Activation Without Reset: N/A 00:25:58.980 Multiple Update Detection Support: N/A 00:25:58.980 Firmware Update Granularity: No Information Provided 00:25:58.980 Per-Namespace SMART Log: Yes 00:25:58.981 Asymmetric Namespace Access Log Page: Supported 00:25:58.981 ANA Transition Time : 10 sec 00:25:58.981 00:25:58.981 Asymmetric Namespace Access Capabilities 00:25:58.981 ANA Optimized State : Supported 00:25:58.981 ANA Non-Optimized State : Supported 00:25:58.981 ANA Inaccessible State : Supported 00:25:58.981 ANA Persistent Loss State : Supported 00:25:58.981 ANA Change State : Supported 00:25:58.981 ANAGRPID is not changed : No 00:25:58.981 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:58.981 00:25:58.981 ANA Group Identifier Maximum : 128 00:25:58.981 Number of ANA Group Identifiers : 128 00:25:58.981 Max Number of Allowed Namespaces : 1024 00:25:58.981 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:25:58.981 Command Effects Log Page: Supported 00:25:58.981 Get Log Page Extended Data: Supported 00:25:58.981 Telemetry Log Pages: Not Supported 00:25:58.981 Persistent Event Log Pages: Not Supported 00:25:58.981 Supported Log Pages Log Page: May Support 00:25:58.981 Commands Supported & Effects Log Page: Not Supported 00:25:58.981 Feature Identifiers & Effects Log Page:May Support 00:25:58.981 NVMe-MI Commands & Effects Log Page: May Support 00:25:58.981 Data Area 4 for Telemetry Log: Not Supported 00:25:58.981 Error Log Page Entries Supported: 128 00:25:58.981 Keep Alive: Supported 00:25:58.981 Keep Alive Granularity: 1000 ms 00:25:58.981 00:25:58.981 NVM Command Set Attributes 00:25:58.981 ========================== 00:25:58.981 Submission Queue Entry Size 00:25:58.981 Max: 64 00:25:58.981 Min: 64 00:25:58.981 Completion Queue Entry Size 00:25:58.981 Max: 16 00:25:58.981 Min: 16 00:25:58.981 Number of Namespaces: 1024 00:25:58.981 Compare Command: Not Supported 00:25:58.981 Write Uncorrectable Command: Not Supported 00:25:58.981 Dataset Management Command: Supported 00:25:58.981 Write Zeroes Command: Supported 00:25:58.981 Set Features Save Field: Not Supported 00:25:58.981 Reservations: Not Supported 00:25:58.981 Timestamp: Not Supported 00:25:58.981 Copy: Not Supported 00:25:58.981 Volatile Write Cache: Present 00:25:58.981 Atomic Write Unit (Normal): 1 00:25:58.981 Atomic Write Unit (PFail): 1 00:25:58.981 Atomic Compare & Write Unit: 1 00:25:58.981 Fused Compare & Write: Not Supported 00:25:58.981 Scatter-Gather List 00:25:58.981 SGL Command Set: Supported 00:25:58.981 SGL Keyed: Not Supported 00:25:58.981 SGL Bit Bucket Descriptor: Not Supported 00:25:58.981 SGL Metadata Pointer: Not Supported 00:25:58.981 Oversized SGL: Not Supported 00:25:58.981 SGL Metadata Address: Not Supported 00:25:58.981 SGL Offset: Supported 00:25:58.981 Transport SGL Data Block: Not Supported 00:25:58.981 Replay Protected Memory Block: Not Supported 00:25:58.981 00:25:58.981 Firmware Slot Information 00:25:58.981 ========================= 00:25:58.981 Active slot: 0 00:25:58.981 00:25:58.981 Asymmetric Namespace Access 00:25:58.981 =========================== 00:25:58.981 Change Count : 0 00:25:58.981 Number of ANA Group Descriptors : 1 00:25:58.981 ANA Group Descriptor : 0 00:25:58.981 ANA Group ID : 1 00:25:58.981 Number of NSID Values : 1 00:25:58.981 Change Count : 0 00:25:58.981 ANA State : 1 00:25:58.981 Namespace Identifier : 1 00:25:58.981 00:25:58.981 Commands Supported and Effects 00:25:58.981 ============================== 00:25:58.981 Admin Commands 00:25:58.981 -------------- 00:25:58.981 Get Log Page (02h): Supported 00:25:58.981 Identify (06h): Supported 00:25:58.981 Abort (08h): Supported 00:25:58.981 Set Features (09h): Supported 00:25:58.981 Get Features (0Ah): Supported 00:25:58.981 Asynchronous Event Request (0Ch): Supported 00:25:58.981 Keep Alive (18h): Supported 00:25:58.981 I/O Commands 00:25:58.981 ------------ 00:25:58.981 Flush (00h): Supported 00:25:58.981 Write (01h): Supported LBA-Change 00:25:58.981 Read (02h): Supported 00:25:58.981 Write Zeroes (08h): Supported LBA-Change 00:25:58.981 Dataset Management (09h): Supported 00:25:58.981 00:25:58.981 Error Log 00:25:58.981 ========= 00:25:58.981 Entry: 0 00:25:58.981 Error Count: 0x3 00:25:58.981 Submission Queue Id: 0x0 00:25:58.981 Command Id: 0x5 00:25:58.981 Phase Bit: 0 00:25:58.981 Status Code: 0x2 00:25:58.981 Status Code Type: 0x0 00:25:58.981 Do Not Retry: 1 00:25:58.981 Error 
Location: 0x28 00:25:58.981 LBA: 0x0 00:25:58.981 Namespace: 0x0 00:25:58.981 Vendor Log Page: 0x0 00:25:58.981 ----------- 00:25:58.981 Entry: 1 00:25:58.981 Error Count: 0x2 00:25:58.981 Submission Queue Id: 0x0 00:25:58.981 Command Id: 0x5 00:25:58.981 Phase Bit: 0 00:25:58.981 Status Code: 0x2 00:25:58.981 Status Code Type: 0x0 00:25:58.981 Do Not Retry: 1 00:25:58.981 Error Location: 0x28 00:25:58.981 LBA: 0x0 00:25:58.981 Namespace: 0x0 00:25:58.981 Vendor Log Page: 0x0 00:25:58.981 ----------- 00:25:58.981 Entry: 2 00:25:58.981 Error Count: 0x1 00:25:58.981 Submission Queue Id: 0x0 00:25:58.981 Command Id: 0x4 00:25:58.981 Phase Bit: 0 00:25:58.981 Status Code: 0x2 00:25:58.981 Status Code Type: 0x0 00:25:58.981 Do Not Retry: 1 00:25:58.981 Error Location: 0x28 00:25:58.981 LBA: 0x0 00:25:58.981 Namespace: 0x0 00:25:58.981 Vendor Log Page: 0x0 00:25:58.981 00:25:58.981 Number of Queues 00:25:58.981 ================ 00:25:58.981 Number of I/O Submission Queues: 128 00:25:58.981 Number of I/O Completion Queues: 128 00:25:58.981 00:25:58.981 ZNS Specific Controller Data 00:25:58.981 ============================ 00:25:58.981 Zone Append Size Limit: 0 00:25:58.981 00:25:58.981 00:25:58.981 Active Namespaces 00:25:58.981 ================= 00:25:58.981 get_feature(0x05) failed 00:25:58.981 Namespace ID:1 00:25:58.981 Command Set Identifier: NVM (00h) 00:25:58.981 Deallocate: Supported 00:25:58.981 Deallocated/Unwritten Error: Not Supported 00:25:58.981 Deallocated Read Value: Unknown 00:25:58.981 Deallocate in Write Zeroes: Not Supported 00:25:58.981 Deallocated Guard Field: 0xFFFF 00:25:58.981 Flush: Supported 00:25:58.981 Reservation: Not Supported 00:25:58.981 Namespace Sharing Capabilities: Multiple Controllers 00:25:58.981 Size (in LBAs): 1310720 (5GiB) 00:25:58.981 Capacity (in LBAs): 1310720 (5GiB) 00:25:58.981 Utilization (in LBAs): 1310720 (5GiB) 00:25:58.981 UUID: 95c716be-23c1-4bd9-81c2-0e70f9d63c28 00:25:58.981 Thin Provisioning: Not Supported 00:25:58.981 Per-NS Atomic Units: Yes 00:25:58.981 Atomic Boundary Size (Normal): 0 00:25:58.981 Atomic Boundary Size (PFail): 0 00:25:58.981 Atomic Boundary Offset: 0 00:25:58.981 NGUID/EUI64 Never Reused: No 00:25:58.981 ANA group ID: 1 00:25:58.981 Namespace Write Protected: No 00:25:58.981 Number of LBA Formats: 1 00:25:58.981 Current LBA Format: LBA Format #00 00:25:58.982 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:25:58.982 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.982 rmmod nvme_tcp 00:25:58.982 rmmod nvme_fabrics 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:58.982 18:33:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:58.982 18:33:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:59.240 18:33:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:59.240 18:33:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:59.240 18:33:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:59.240 18:33:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:59.240 18:33:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:59.240 18:33:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:59.240 18:33:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:59.808 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:59.808 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:00.067 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:00.067 ************************************ 00:26:00.067 END TEST nvmf_identify_kernel_target 00:26:00.067 ************************************ 00:26:00.067 00:26:00.067 real 0m3.066s 00:26:00.067 user 0m1.084s 00:26:00.067 sys 0m1.455s 00:26:00.067 18:33:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:00.067 18:33:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:00.067 18:33:11 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1142 -- # return 0 00:26:00.067 18:33:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:00.067 18:33:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:00.067 18:33:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:00.067 18:33:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.067 ************************************ 00:26:00.067 START TEST nvmf_auth_host 00:26:00.067 ************************************ 00:26:00.067 18:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:00.067 * Looking for test storage... 00:26:00.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.067 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:00.068 18:33:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:00.068 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:00.327 Cannot find device "nvmf_tgt_br" 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:00.327 Cannot find device "nvmf_tgt_br2" 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:00.327 Cannot find device "nvmf_tgt_br" 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:00.327 Cannot find device "nvmf_tgt_br2" 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:00.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:00.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:00.327 18:33:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:00.327 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:00.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:26:00.586 00:26:00.586 --- 10.0.0.2 ping statistics --- 00:26:00.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.586 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:00.586 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:00.586 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:26:00.586 00:26:00.586 --- 10.0.0.3 ping statistics --- 00:26:00.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.586 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:00.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:00.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:26:00.586 00:26:00.586 --- 10.0.0.1 ping statistics --- 00:26:00.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.586 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=85585 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 85585 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 85585 ']' 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.586 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
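For reference, the nvmfappstart/waitforlisten step logged above amounts to launching nvmf_tgt inside the test network namespace and then polling until its RPC socket is ready before any RPCs are issued. A minimal sketch of that flow (not the autotest helper itself; it assumes the default /var/tmp/spdk.sock RPC socket named in the "Waiting for process..." message and reuses the namespace, binary path and flags shown in the log):

# Sketch only: start the target inside the namespace used by these tests
# and wait for its UNIX-domain RPC socket to appear.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
for _ in $(seq 1 100); do
    [[ -S /var/tmp/spdk.sock ]] && break   # RPC listener is up
    sleep 0.1
done
kill -0 "$nvmfpid"                         # fail fast if the target already died

The real helper additionally verifies the socket via rpc.py rather than a bare socket test; the loop above only illustrates the shape of the wait.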
00:26:00.587 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.587 18:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.531 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.531 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:26:01.531 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:01.531 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:01.531 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.531 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.531 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bfedd8f83a79b4040f16d458a7847e53 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hDp 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bfedd8f83a79b4040f16d458a7847e53 0 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bfedd8f83a79b4040f16d458a7847e53 0 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bfedd8f83a79b4040f16d458a7847e53 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hDp 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hDp 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.hDp 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.790 18:33:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1504b74f765fe20d2a644847b906161cbf48c0ac8c866b9e991c667c5f474ddd 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.os5 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1504b74f765fe20d2a644847b906161cbf48c0ac8c866b9e991c667c5f474ddd 3 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1504b74f765fe20d2a644847b906161cbf48c0ac8c866b9e991c667c5f474ddd 3 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1504b74f765fe20d2a644847b906161cbf48c0ac8c866b9e991c667c5f474ddd 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.os5 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.os5 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.os5 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7401f8db32d08c2d861eb51b7dc46f267f7f7654c1da34ad 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Q4Q 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7401f8db32d08c2d861eb51b7dc46f267f7f7654c1da34ad 0 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7401f8db32d08c2d861eb51b7dc46f267f7f7654c1da34ad 0 
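Each gen_dhchap_key call traced above follows the same pattern: draw len hex characters from /dev/urandom with xxd, hand the hex string to an embedded python helper that wraps it as a DHHC-1 secret, and store the result in a 0600 temp file whose path becomes one of the keys[]/ckeys[] entries. A stand-alone sketch of that pattern for the 32-character "null" (digest 0) case is below; the DHHC-1 layout used here (base64 of the key bytes followed by a little-endian CRC-32 trailer, as in nvme-cli's gen-dhchap-key) is an assumption, not code lifted from the nvmf/common.sh helper.

# Sketch only: mirror the gen_dhchap_key steps seen in the log.
digest=0; len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. bfedd8f8... (32 hex chars)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(key))          # assumed little-endian CRC-32 trailer
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
chmod 0600 "$file"
echo "$file"                                      # path consumed as keys[0]

The digest field (00 here) selects the hash applied to the secret, matching the null/sha256/sha384/sha512 -> 0/1/2/3 map set up in the traced digests array.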
00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7401f8db32d08c2d861eb51b7dc46f267f7f7654c1da34ad 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Q4Q 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Q4Q 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Q4Q 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e19360119889e4476e9a448f2a6354f2fa57225188cc0839 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Hen 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e19360119889e4476e9a448f2a6354f2fa57225188cc0839 2 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e19360119889e4476e9a448f2a6354f2fa57225188cc0839 2 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e19360119889e4476e9a448f2a6354f2fa57225188cc0839 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:01.790 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Hen 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Hen 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Hen 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.050 18:33:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6aab08be49eaca6f4f3fcb4ea98a293c 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Gud 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6aab08be49eaca6f4f3fcb4ea98a293c 1 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6aab08be49eaca6f4f3fcb4ea98a293c 1 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6aab08be49eaca6f4f3fcb4ea98a293c 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Gud 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Gud 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Gud 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7822c1505bf1c752f1da2d3704454178 00:26:02.050 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.OtI 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7822c1505bf1c752f1da2d3704454178 1 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7822c1505bf1c752f1da2d3704454178 1 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=7822c1505bf1c752f1da2d3704454178 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.OtI 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.OtI 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.OtI 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=402723bfec499340531c81f2a5803bb4963889342c31d9ac 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VPc 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 402723bfec499340531c81f2a5803bb4963889342c31d9ac 2 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 402723bfec499340531c81f2a5803bb4963889342c31d9ac 2 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=402723bfec499340531c81f2a5803bb4963889342c31d9ac 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:02.051 18:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VPc 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VPc 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.VPc 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:02.051 18:33:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=40d5e618e318d87e0bd341e58133b9b1 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.04w 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 40d5e618e318d87e0bd341e58133b9b1 0 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 40d5e618e318d87e0bd341e58133b9b1 0 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=40d5e618e318d87e0bd341e58133b9b1 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:02.051 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.04w 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.04w 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.04w 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ab2e00ce881207e3486c722dbcca04fdfd62cc7502c448319f105340cbd21803 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gbN 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ab2e00ce881207e3486c722dbcca04fdfd62cc7502c448319f105340cbd21803 3 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ab2e00ce881207e3486c722dbcca04fdfd62cc7502c448319f105340cbd21803 3 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ab2e00ce881207e3486c722dbcca04fdfd62cc7502c448319f105340cbd21803 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gbN 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gbN 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.gbN 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 85585 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 85585 ']' 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.310 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hDp 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.os5 ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.os5 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Q4Q 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Hen ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Hen 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Gud 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.OtI ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OtI 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.VPc 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.04w ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.04w 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.gbN 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:02.569 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.570 18:33:14 
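Each gen_dhchap_key call traced above draws random bytes with xxd and wraps them into a DHHC-1:<hash id>:<base64>: secret before keyring_file_add_key registers the resulting /tmp/spdk.key-* file with the target. A minimal standalone sketch of that formatting step follows, mirroring the trace's own xxd/mktemp/python steps; the gen_key name is illustrative (not SPDK's helper), and the payload layout (ASCII hex secret followed by its little-endian CRC-32, then base64) is an assumption that is consistent with the DHHC-1:00:...==: strings visible in the log.

gen_key() {                      # illustrative helper, not SPDK's gen_dhchap_key
    local nbytes=$1 hash_id=$2   # hash_id: 0=none, 1=sha256, 2=sha384, 3=sha512
    local hex
    hex=$(xxd -p -c0 -l "$nbytes" /dev/urandom)     # same command as in the trace
    python3 - "$hex" "$hash_id" <<'PY'
import base64, sys, zlib
secret, hash_id = sys.argv[1].encode(), int(sys.argv[2])
# assumed payload: ASCII hex secret + its little-endian CRC-32, base64-encoded
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(hash_id, base64.b64encode(secret + crc).decode()))
PY
}

key_file=$(mktemp -t spdk.key-null.XXX)             # e.g. /tmp/spdk.key-null.Q4Q above
gen_key 24 0 > "$key_file" && chmod 0600 "$key_file"
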
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:02.570 18:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:03.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:03.137 Waiting for block devices as requested 00:26:03.137 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:03.137 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:03.733 No valid GPT data, bailing 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:03.733 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:03.993 No valid GPT data, bailing 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:03.993 No valid GPT data, bailing 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:03.993 No valid GPT data, bailing 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:03.993 18:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:03.993 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -a 10.0.0.1 -t tcp -s 4420 00:26:04.253 00:26:04.253 Discovery Log Number of Records 2, Generation counter 2 00:26:04.253 =====Discovery Log Entry 0====== 00:26:04.253 trtype: tcp 00:26:04.253 adrfam: ipv4 00:26:04.253 subtype: current discovery subsystem 00:26:04.253 treq: not specified, sq flow control disable supported 00:26:04.253 portid: 1 00:26:04.253 trsvcid: 4420 00:26:04.253 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:04.253 traddr: 10.0.0.1 00:26:04.253 eflags: none 00:26:04.253 sectype: none 00:26:04.253 =====Discovery Log Entry 1====== 00:26:04.253 trtype: tcp 00:26:04.253 adrfam: ipv4 00:26:04.253 subtype: nvme subsystem 00:26:04.253 treq: not specified, sq flow control disable supported 00:26:04.253 portid: 1 00:26:04.253 trsvcid: 4420 00:26:04.253 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:04.253 traddr: 10.0.0.1 00:26:04.253 eflags: none 00:26:04.253 sectype: none 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.253 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.512 nvme0n1 00:26:04.512 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.512 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.512 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.512 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.512 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.512 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.513 nvme0n1 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.513 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.773 
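The configure_kernel_target trace a few entries above shows the echoed values (SPDK-nqn..., 1, /dev/nvme1n1, 10.0.0.1, tcp, 4420, ipv4) but xtrace hides where they are redirected. The sketch below is a plausible reconstruction under the assumption that the writes go to the standard Linux nvmet configfs attributes; only the paths, values and command order are taken from the trace, the attribute names are assumed.

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed target of 'echo SPDK-...'
echo 1            > "$subsys/attr_allow_any_host"             # assumed target of the first 'echo 1'
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"        # block device probed above
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420    # should list the discovery subsystem and cnode0, as in the log

The later host/auth.sh@36-38 entries (mkdir under hosts/, echo 0, ln -s into allowed_hosts) then appear to switch the subsystem from "allow any host" to an explicit allow list for nqn.2024-02.io.spdk:host0.
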
18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.773 18:33:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.773 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.774 nvme0n1 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:04.774 18:33:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.774 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.033 nvme0n1 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:05.033 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.034 18:33:16 
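Each connect_authenticate round above follows the same host-side RPC pattern. A condensed sketch, assuming rpc_cmd in the trace forwards to SPDK's scripts/rpc.py against the running target and that the key files registered earlier are still present; all flags and arguments are as they appear in the log.

rpc=scripts/rpc.py                                        # assumed equivalent of rpc_cmd
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.Q4Q    # host secret (keys[1] above)
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Hen  # controller secret for bidirectional auth
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'         # expect "nvme0" once authentication succeeds
$rpc bdev_nvme_detach_controller nvme0                    # detach before the next digest/dhgroup/key combination
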
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.034 18:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.034 nvme0n1 00:26:05.034 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.034 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.034 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.034 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.034 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:05.294 
18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
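On the kernel side, each nvmet_auth_set_key call echoes the hash name, DH group and DHHC-1 secrets, with the redirect targets again hidden by xtrace. A rough reconstruction, assuming the writes land in the per-host dhchap_* configfs attributes of the nvmet auth interface; only the echoed values and the key-file path come from the trace.

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"            # assumed attribute name; value as echoed above
echo ffdhe2048      > "$host/dhchap_dhgroup"         # assumed attribute name
cat /tmp/spdk.key-sha512.gbN > "$host/dhchap_key"    # keys[4]; equivalent to echoing the DHHC-1 string
# ckeys[4] is empty, so nothing would be written to dhchap_ctrl_key and this round
# only authenticates the host (no controller/bidirectional authentication)
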
00:26:05.294 nvme0n1 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.294 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.863 18:33:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.863 nvme0n1 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.863 18:33:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.863 18:33:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.863 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.122 nvme0n1 00:26:06.122 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.122 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.122 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.122 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.122 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.122 18:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.122 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.122 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.122 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.122 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.122 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.123 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.382 nvme0n1 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.382 nvme0n1 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.382 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.641 nvme0n1 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.641 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.900 18:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.469 18:33:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.469 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.728 nvme0n1 00:26:07.728 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.728 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.728 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.728 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.728 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.728 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.728 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.728 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.729 18:33:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.729 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.988 nvme0n1 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:07.988 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.989 18:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.248 nvme0n1 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.248 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.507 nvme0n1 00:26:08.507 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.507 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.507 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.507 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.507 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.507 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.507 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.507 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.507 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.507 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.766 18:33:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.766 nvme0n1 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.766 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.025 18:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.925 18:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.183 nvme0n1 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:11.183 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.184 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.443 nvme0n1 00:26:11.443 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.443 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.443 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.701 18:33:23 
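Annotation: the nvmet_auth_set_key calls traced above provision the target side for each digest/DH-group/key-index combination before the host tries to connect. A minimal reconstruction of that helper, based only on the host/auth.sh@42-51 xtrace visible in this log (the redirection targets of the echo commands are not shown here, and the keys/ckeys arrays are inferred from the surrounding loop), looks roughly like this; it is a sketch, not the verbatim script body:

    # Sketch reconstructed from the host/auth.sh@42-51 trace above.
    # keys[]/ckeys[] are assumed to hold the DHHC-1 secrets seen in the log;
    # where the echoed values are written is not visible in this excerpt.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac(${digest})"               # e.g. 'hmac(sha256)'
        echo "${dhgroup}"                    # e.g. ffdhe6144
        echo "${key}"                        # host DHCHAP secret (DHHC-1:xx:...)
        [[ -z ${ckey} ]] || echo "${ckey}"   # controller secret, only when one is defined
    }
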
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.701 18:33:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.701 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.960 nvme0n1 00:26:11.960 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.960 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.960 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.960 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.960 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.960 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.960 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.960 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.960 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.960 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.233 18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.233 
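Annotation: connect_authenticate is the host-side half of each iteration. As traced at host/auth.sh@55-65, it restricts the initiator to the digest and DH group under test, attaches a controller with the matching DHCHAP key (plus the controller key when one exists), checks that nvme0 shows up, and detaches it again. Condensed below using the literal values of the sha256/ffdhe6144/keyid=3 pass above; this is a paraphrase of the traced RPC calls, not the script itself:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    # get_main_ns_ip resolves the initiator address, 10.0.0.1 in this run
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

If authentication fails for a digest/dhgroup/key combination, the attach RPC fails and no nvme0 controller is listed, which is what the get_controllers check catches.
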
18:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.491 nvme0n1 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.492 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.058 nvme0n1 00:26:13.058 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.058 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.058 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.058 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.058 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.059 18:33:24 
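Annotation: the keyid=4 passes in this trace (for example the ffdhe6144 one above, where ckey is empty) exercise unidirectional authentication: the array expansion at host/auth.sh@58 produces nothing, so bdev_nvme_attach_controller is called with --dhchap-key key4 only and no --dhchap-ctrlr-key. A stand-alone demo of that ${var:+...} idiom, with placeholder values rather than the test's real secrets:

    # ${ckeys[keyid]:+...} expands to the option pair only when ckeys[keyid]
    # is set and non-empty; otherwise ckey=() and nothing extra is passed.
    ckeys=("DHHC-1:03:placeholder" "")
    for keyid in 0 1; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=${keyid}: --dhchap-key key${keyid} ${ckey[*]}"
    done
    # prints: keyid=0: --dhchap-key key0 --dhchap-ctrlr-key ckey0
    #         keyid=1: --dhchap-key key1
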
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.059 18:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.625 nvme0n1 00:26:13.625 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.626 18:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.561 nvme0n1 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:14.561 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.562 
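Annotation: the nvmf/common.sh@741-755 sequence that repeats throughout this trace is get_main_ns_ip. It picks the address variable for the transport in use (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints that variable's value, which is 10.0.0.1 throughout this run. A compact reconstruction; the transport variable's real name is not visible in the trace, so TEST_TRANSPORT below is a stand-in, and the empty-value guards from the trace are omitted:

    # Sketch of the nvmf/common.sh@741-755 helper as it appears in this xtrace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # expands to NVMF_INITIATOR_IP for tcp
        echo "${!ip}"                          # indirect expansion -> 10.0.0.1 in this run
    }
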
18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.562 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.129 nvme0n1 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:15.129 18:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.129 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.129 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.129 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.129 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.129 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:15.129 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.129 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.129 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.130 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.751 nvme0n1 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.751 18:33:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.751 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.752 18:33:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.752 18:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.688 nvme0n1 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.688 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:16.689 nvme0n1 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.689 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.948 nvme0n1 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:16.948 
18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.948 nvme0n1 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.948 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.949 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.949 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.949 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.949 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.949 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.949 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.949 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.207 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.207 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.207 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:17.207 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.207 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.207 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.207 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.207 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:17.207 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.208 
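The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at host/auth.sh@58 builds the controller-key argument conditionally: when ckeys[keyid] is empty (key index 4 in this run), the array stays empty and --dhchap-ctrlr-key is simply not passed, so that key is tested without bidirectional authentication. A minimal sketch of the same pattern, assuming key names of the form keyN/ckeyN as used in this trace and with $ip, $hostnqn and $subnqn as placeholder variables:

    # empty ckeys[keyid] -> ckey=() -> no --dhchap-ctrlr-key on the attach call
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" "${ckey[@]}"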
18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.208 18:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.208 nvme0n1 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.208 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.467 nvme0n1 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.467 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.467 nvme0n1 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.725 
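After each attach, the script confirms that the authenticated controller actually came up and then tears it down before moving to the next key: bdev_nvme_get_controllers is piped through jq to pull the controller name, the name is compared against nvme0, and bdev_nvme_detach_controller removes it. A condensed restatement of the host/auth.sh@64-65 steps visible above, not the script's literal code:

    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]            # same check as the escaped [[ nvme0 == \n\v\m\e\0 ]] in the trace
    rpc_cmd bdev_nvme_detach_controller nvme0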
18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.725 18:33:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.725 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.726 nvme0n1 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.726 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:17.983 18:33:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.983 nvme0n1 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.983 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.984 18:33:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.984 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.241 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.241 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.241 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.241 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.241 18:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.241 nvme0n1 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.241 
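The get_main_ns_ip fragments repeated throughout this trace (nvmf/common.sh@741-755) pick the address used for each attach: an associative array maps the transport to the name of the environment variable holding the initiator address, the tcp entry resolves to NVMF_INITIATOR_IP, and indirect expansion yields 10.0.0.1 in this job. A rough equivalent, with the transport variable name assumed rather than taken from the trace:

    declare -A ip_candidates=( ["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP )
    ip=${ip_candidates["$TEST_TRANSPORT"]}   # "tcp" here; the variable name is an assumption
    echo "${!ip}"                            # indirect expansion -> 10.0.0.1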
18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:18.241 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.242 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
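At this point the ffdhe3072 iterations are complete and the trace moves on to ffdhe4096. Reconstructed from the loop markers visible in the trace (host/auth.sh@101 for dhgroup in "${dhgroups[@]}", @102 for keyid in "${!keys[@]}", plus the nvmet_auth_set_key and connect_authenticate calls), the overall shape of this phase of auth.sh is roughly the sketch below; sha384 is the digest for this phase, and any outer loop over digests is not visible in this excerpt.

    for dhgroup in "${dhgroups[@]}"; do            # ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do             # 0..4 in this run
            nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"      # target side: install key/ckey for this index
            connect_authenticate "sha384" "$dhgroup" "$keyid"    # host side: set_options, attach, verify, detach
        done
    done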
00:26:18.499 nvme0n1 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:18.499 18:33:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.499 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.757 nvme0n1 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.757 18:33:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.757 18:33:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.757 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.015 nvme0n1 00:26:19.015 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.015 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.015 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.015 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.015 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.015 18:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.015 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.015 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.015 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.015 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.273 nvme0n1 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.273 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.531 nvme0n1 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.531 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.790 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.049 nvme0n1 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
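For readers following the trace above: each pass of the auth loop configures one DH-HMAC-CHAP digest/dhgroup/key combination on the target and then exercises it from the host with four RPCs. Below is a minimal sketch of the host-side sequence for the sha384/ffdhe4096/key1 case; it assumes rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py (the wrapper name here is hypothetical), while the flags, NQNs, address and key names are copied verbatim from the log. The target-side half of each iteration (the echo 'hmac(sha384)', echo ffdhe4096 and echo DHHC-1:... lines) appears to push the same digest, dhgroup and secret into the kernel nvmet host entry so both ends agree before the attach is attempted.

# Hedged sketch of one connect_authenticate iteration, host side only.
# Assumes rpc() stands in for the test's rpc_cmd (scripts/rpc.py wrapper);
# addresses, NQNs, flags and key names are taken from the trace above.

rpc() { scripts/rpc.py "$@"; }   # hypothetical wrapper, not the real helper

# 1. Restrict the initiator to the digest/dhgroup under test.
rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# 2. Attach with the per-keyid DH-CHAP key (plus controller key for bidirectional auth).
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Verify the controller actually came up under the expected name.
name=$(rpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]]

# 4. Tear down before the next keyid/dhgroup combination.
rpc bdev_nvme_detach_controller nvme0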
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.049 18:33:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.049 18:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.308 nvme0n1 00:26:20.308 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.308 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.308 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.308 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.308 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.308 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.567 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.568 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.568 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.568 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.568 18:33:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.568 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.826 nvme0n1 00:26:20.826 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.826 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.826 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.826 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.826 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.826 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.085 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.086 18:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.345 nvme0n1 00:26:21.345 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.345 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.345 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.345 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.345 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.345 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.345 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.346 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.913 nvme0n1 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.913 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.914 18:33:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.914 18:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.173 nvme0n1 00:26:22.173 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.173 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.173 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.173 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.173 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.173 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
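One detail that is easy to miss in the trace: key index 4 carries no controller key (ckey is empty), so the [[ -z '' ]] check is skipped and the attach for key4 is issued without --dhchap-ctrlr-key, i.e. authentication runs in one direction only. The ${ckeys[keyid]:+...} expansion seen at host/auth.sh@58 is the bash idiom responsible. A stripped-down illustration follows; the array contents are placeholders, only the expansion itself mirrors the test script.

# Sketch of the conditional --dhchap-ctrlr-key handling visible in the trace.
# ckeys[] holds per-keyid controller secrets (DHHC-1:... strings in the real
# test); an empty entry means unidirectional authentication.

declare -a ckeys=( "ckey-for-keyid-0" "" )   # illustrative values only

for keyid in "${!ckeys[@]}"; do
    # :+ expands to the option pair only when ckeys[keyid] is non-empty,
    # so the attach command silently drops --dhchap-ctrlr-key for the
    # empty entry (keyid 4 in the real run).
    ckey_arg=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo rpc.py bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey_arg[@]}"
done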
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.432 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.998 nvme0n1 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:22.998 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.999 18:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.565 nvme0n1 00:26:23.565 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.565 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.565 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.565 18:33:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.565 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.565 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.823 18:33:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.823 18:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.391 nvme0n1 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.391 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.391 
18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.958 nvme0n1 00:26:24.958 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.958 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.958 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.958 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.958 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.958 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.217 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.217 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.217 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.217 18:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.217 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 nvme0n1 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:25.784 18:33:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.784 18:33:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.784 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.042 nvme0n1 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:26.042 18:33:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.042 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.043 18:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.043 nvme0n1 00:26:26.043 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.043 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.043 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.043 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.043 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.043 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.301 nvme0n1 00:26:26.301 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.302 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.560 nvme0n1 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.560 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.818 nvme0n1 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.818 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:26.819 nvme0n1 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.819 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.077 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.078 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.078 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.078 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.078 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.078 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.078 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.078 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.078 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.078 18:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.078 nvme0n1 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:27.078 
18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.078 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.335 nvme0n1 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.335 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.336 
18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.336 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.593 nvme0n1 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.593 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.594 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.852 nvme0n1 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.852 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.110 nvme0n1 00:26:28.111 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.111 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.111 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.111 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.111 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.111 18:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.111 
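The get_main_ns_ip calls traced above (nvmf/common.sh@741-755) all resolve to 10.0.0.1: the helper keeps a per-transport table of candidate variable names and returns the value of the entry matching the transport under test. A minimal sketch of that logic as it can be read back from the xtrace follows; the TEST_TRANSPORT variable name and the exported NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP values are assumptions, since the trace only shows their expanded values (tcp and 10.0.0.1).

  # Reconstructed from the nvmf/common.sh xtrace above; not the verbatim helper.
  # Assumes TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP are
  # exported by the test environment (tcp and 10.0.0.1 in this run).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
      [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1
      echo "${!ip}"
  }
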
18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.111 18:33:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.111 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.369 nvme0n1 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:28.369 18:33:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.369 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.627 nvme0n1 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.627 18:33:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.627 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.628 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.886 nvme0n1 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:28.886 
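The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58 is what makes bidirectional authentication optional per keyid: when a controller key exists the array carries the extra --dhchap-ctrlr-key argument pair, and when it is empty (keyid 4 in this run, ckey='') the array stays empty, which is why the key4 attach calls in this trace pass only --dhchap-key key4. A small standalone illustration of the idiom; the values below are placeholders standing in for the keyring names used by the test.

  # ${var:+...} expands to the alternate words only when var is set and
  # non-empty, so the array stays empty for keyids without a controller key.
  ckeys=("ck0" "ck1" "ck2" "ck3" "")   # placeholder values, keyid 4 has no controller key
  for keyid in "${!ckeys[@]}"; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-no --dhchap-ctrlr-key argument}"
  done
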
18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.886 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.887 18:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
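Each (digest, dhgroup, keyid) iteration in this trace is the same round trip: nvmet_auth_set_key programs the hash, DH group and DHHC-1 secret for the host NQN on the kernel nvmet target, then connect_authenticate (host/auth.sh@55-65) restricts the initiator to that digest/DH group, attaches a controller with the matching key, checks that bdev_nvme_get_controllers reports it, and detaches again for the next round. Below is a condensed sketch of the sha512/ffdhe4096/keyid=0 iteration; the RPC names, flags, NQNs, address and key strings are taken from the trace, while scripts/rpc.py stands in for the test's rpc_cmd wrapper, the configfs attribute names on the target side are assumptions (only the echoed 'hmac(sha512)', dhgroup and DHHC-1 values appear in the trace), and key${keyid}/ckey${keyid} are assumed to be keyring names registered with the initiator earlier in the test.

  # One sha512/ffdhe4096/keyid=0 iteration, condensed from the trace above.
  digest=sha512 dhgroup=ffdhe4096 keyid=0
  key="DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy:"
  ckey=(--dhchap-ctrlr-key "ckey${keyid}")   # empty array when no controller key (keyid 4)

  # Target side (nvmet_auth_set_key): program hash, DH group and secret for the
  # host NQN. The attribute names under configfs are an assumption.
  hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo "hmac($digest)" > "$hostdir/dhchap_hash"
  echo "$dhgroup" > "$hostdir/dhchap_dhgroup"
  echo "$key" > "$hostdir/dhchap_key"        # a controller key would go to dhchap_ctrl_key

  # Host side (connect_authenticate): restrict the initiator to this digest/DH group ...
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # ... attach with the matching keyring entries over 10.0.0.1:4420 ...
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"

  # ... verify the controller authenticated and came up, then detach for the next round.
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0
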
00:26:29.145 nvme0n1 00:26:29.145 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.145 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.145 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.145 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.145 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.145 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.146 18:33:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.146 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.404 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.663 nvme0n1 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.663 18:33:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.663 18:33:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.663 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.230 nvme0n1 00:26:30.230 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.230 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.230 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.230 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.230 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.230 18:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:30.230 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.231 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.489 nvme0n1 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.489 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.746 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.746 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.746 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.746 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.746 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.747 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.747 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.747 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.747 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.747 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.747 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.747 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.747 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.747 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.747 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.004 nvme0n1 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.004 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.005 18:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.570 nvme0n1 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmZlZGQ4ZjgzYTc5YjQwNDBmMTZkNDU4YTc4NDdlNTP17YJy: 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: ]] 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTUwNGI3NGY3NjVmZTIwZDJhNjQ0ODQ3YjkwNjE2MWNiZjQ4YzBhYzhjODY2YjllOTkxYzY2N2M1ZjQ3NGRkZBMSeHA=: 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.570 18:33:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.570 18:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.135 nvme0n1 00:26:32.135 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.135 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.135 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.135 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.135 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.135 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.135 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.135 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.135 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.135 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.393 18:33:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.393 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.958 nvme0n1 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmFhYjA4YmU0OWVhY2E2ZjRmM2ZjYjRlYTk4YTI5M2N4L+7k: 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: ]] 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzgyMmMxNTA1YmYxYzc1MmYxZGEyZDM3MDQ0NTQxNzjEDxuC: 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.958 18:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.893 nvme0n1 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDAyNzIzYmZlYzQ5OTM0MDUzMWM4MWYyYTU4MDNiYjQ5NjM4ODkzNDJjMzFkOWFjFhhVaw==: 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: ]] 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDBkNWU2MThlMzE4ZDg3ZTBiZDM0MWU1ODEzM2I5YjEBNVUV: 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.893 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.894 18:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.486 nvme0n1 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWIyZTAwY2U4ODEyMDdlMzQ4NmM3MjJkYmNjYTA0ZmRmZDYyY2M3NTAyYzQ0ODMxOWYxMDUzNDBjYmQyMTgwM9hyp4w=: 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.486 18:33:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.486 18:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.053 nvme0n1 00:26:35.053 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.053 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.053 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.053 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.053 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.053 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.053 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.053 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.053 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.053 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQwMWY4ZGIzMmQwOGMyZDg2MWViNTFiN2RjNDZmMjY3ZjdmNzY1NGMxZGEzNGFkY0VPNQ==: 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: ]] 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTE5MzYwMTE5ODg5ZTQ0NzZlOWE0NDhmMmE2MzU0ZjJmYTU3MjI1MTg4Y2MwODM5LkR7+Q==: 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # 
local es=0 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:35.311 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.312 request: 00:26:35.312 { 00:26:35.312 "name": "nvme0", 00:26:35.312 "trtype": "tcp", 00:26:35.312 "traddr": "10.0.0.1", 00:26:35.312 "adrfam": "ipv4", 00:26:35.312 "trsvcid": "4420", 00:26:35.312 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:35.312 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:35.312 "prchk_reftag": false, 00:26:35.312 "prchk_guard": false, 00:26:35.312 "hdgst": false, 00:26:35.312 "ddgst": false, 00:26:35.312 "method": "bdev_nvme_attach_controller", 00:26:35.312 "req_id": 1 00:26:35.312 } 00:26:35.312 Got JSON-RPC error response 00:26:35.312 response: 00:26:35.312 { 00:26:35.312 "code": -5, 00:26:35.312 "message": "Input/output error" 00:26:35.312 } 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.312 18:33:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.312 request: 00:26:35.312 { 00:26:35.312 "name": "nvme0", 00:26:35.312 "trtype": "tcp", 00:26:35.312 "traddr": "10.0.0.1", 00:26:35.312 "adrfam": "ipv4", 00:26:35.312 "trsvcid": "4420", 00:26:35.312 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:35.312 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:35.312 "prchk_reftag": false, 00:26:35.312 "prchk_guard": false, 00:26:35.312 "hdgst": false, 00:26:35.312 "ddgst": false, 00:26:35.312 "dhchap_key": "key2", 00:26:35.312 "method": "bdev_nvme_attach_controller", 00:26:35.312 "req_id": 1 00:26:35.312 } 00:26:35.312 Got JSON-RPC error response 00:26:35.312 response: 00:26:35.312 { 00:26:35.312 "code": -5, 00:26:35.312 "message": "Input/output error" 00:26:35.312 } 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.312 18:33:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.312 request: 00:26:35.312 { 00:26:35.312 "name": "nvme0", 00:26:35.312 "trtype": "tcp", 00:26:35.312 "traddr": "10.0.0.1", 00:26:35.312 "adrfam": "ipv4", 00:26:35.312 "trsvcid": "4420", 00:26:35.312 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:35.312 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:26:35.312 "prchk_reftag": false, 00:26:35.312 "prchk_guard": false, 00:26:35.312 "hdgst": false, 00:26:35.312 "ddgst": false, 00:26:35.312 "dhchap_key": "key1", 00:26:35.312 "dhchap_ctrlr_key": "ckey2", 00:26:35.312 "method": "bdev_nvme_attach_controller", 00:26:35.312 "req_id": 1 00:26:35.312 } 00:26:35.312 Got JSON-RPC error response 00:26:35.312 response: 00:26:35.312 { 00:26:35.312 "code": -5, 00:26:35.312 "message": "Input/output error" 00:26:35.312 } 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:35.312 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:35.571 rmmod nvme_tcp 00:26:35.571 rmmod nvme_fabrics 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 85585 ']' 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 85585 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 85585 ']' 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 85585 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85585 00:26:35.571 killing process with pid 85585 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85585' 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 85585 00:26:35.571 18:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@972 -- # wait 85585 00:26:36.505 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:36.505 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:36.505 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:36.505 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:36.505 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:36.505 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.505 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:36.505 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:36.763 18:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:37.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:37.330 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:37.588 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:37.588 18:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.hDp /tmp/spdk.key-null.Q4Q /tmp/spdk.key-sha256.Gud /tmp/spdk.key-sha384.VPc /tmp/spdk.key-sha512.gbN /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:26:37.588 18:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:37.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:37.846 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:37.846 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:26:37.846 00:26:37.846 real 0m37.902s 00:26:37.846 user 0m33.722s 00:26:37.846 sys 0m4.123s 00:26:37.846 ************************************ 00:26:37.846 END TEST nvmf_auth_host 00:26:37.846 ************************************ 00:26:37.846 18:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:37.846 18:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.106 18:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:26:38.106 18:33:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:38.106 18:33:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:38.106 18:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:38.106 18:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.106 18:33:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.106 ************************************ 00:26:38.106 START TEST nvmf_digest 00:26:38.106 ************************************ 00:26:38.106 18:33:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:38.106 * Looking for test storage... 00:26:38.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.106 18:33:50 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:38.106 
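For reference, the nvmf_auth_host run that finishes above exercises the same short RPC sequence for every digest/DH-group/key-id combination. Below is a condensed sketch of one successful iteration, assembled only from the rpc_cmd calls visible in the trace (controller name, NQNs and the 10.0.0.1 listener address are exactly as printed in the log; the real host/auth.sh additionally loops over all keys and programs the kernel nvmet side via nvmet_auth_set_key before each attach):

    # restrict the initiator to the digest/DH-group pair under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # connect with the host key (and the controller key, when a ckeyN is defined)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # confirm the controller came up, then detach before the next combination
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The NOT-wrapped attach attempts near the end (no key at all, key2 alone, and key1 paired with ckey2) are the negative path: each returns the JSON-RPC response with "code": -5, "Input/output error" shown above, which the harness counts as the expected authentication failure.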
18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:38.106 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:38.107 Cannot find device "nvmf_tgt_br" 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:38.107 Cannot find device "nvmf_tgt_br2" 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:38.107 Cannot find device "nvmf_tgt_br" 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:38.107 Cannot find device "nvmf_tgt_br2" 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:26:38.107 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:38.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:38.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip 
netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:38.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:26:38.365 00:26:38.365 --- 10.0.0.2 ping statistics --- 00:26:38.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.365 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:38.365 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:38.365 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:26:38.365 00:26:38.365 --- 10.0.0.3 ping statistics --- 00:26:38.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.365 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:38.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:38.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:38.365 00:26:38.365 --- 10.0.0.1 ping statistics --- 00:26:38.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.365 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.365 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:38.623 ************************************ 00:26:38.623 START TEST nvmf_digest_clean 00:26:38.623 ************************************ 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=87170 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 87170 00:26:38.623 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 87170 ']' 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:38.623 18:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:38.623 [2024-07-22 18:33:50.510269] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:38.623 [2024-07-22 18:33:50.510437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.881 [2024-07-22 18:33:50.692150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.138 [2024-07-22 18:33:50.990735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.138 [2024-07-22 18:33:50.990811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.138 [2024-07-22 18:33:50.990846] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.138 [2024-07-22 18:33:50.990861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.138 [2024-07-22 18:33:50.990873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
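The nvmf_veth_init trace above amounts to the following topology, which the nvmf_tgt just launched inside nvmf_tgt_ns_spdk relies on. Interface names and addresses are copied from the trace; this is a condensed sketch, not a verbatim excerpt of the harness. The earlier "Cannot find device" / "Cannot open network namespace" messages come from the preceding cleanup of any leftover topology and are tolerated (each is followed by "true" in the trace).

# Sketch of the veth/bridge test network built by nvmf_veth_init (names as in the trace).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                 # initiator -> target connectivity check
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # target -> initiator connectivity check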
00:26:39.138 [2024-07-22 18:33:50.990927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.705 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.705 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:39.705 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:39.705 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:39.705 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:39.705 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.705 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:39.705 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:39.705 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:39.705 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.705 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:39.705 [2024-07-22 18:33:51.693958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:39.962 null0 00:26:39.962 [2024-07-22 18:33:51.823933] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.962 [2024-07-22 18:33:51.848163] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.962 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.962 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:39.962 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:39.962 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:39.962 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:39.962 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:39.962 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:39.962 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:39.963 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87202 00:26:39.963 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:39.963 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87202 /var/tmp/bperf.sock 00:26:39.963 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 87202 ']' 00:26:39.963 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:26:39.963 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:39.963 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:39.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:39.963 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:39.963 18:33:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:39.963 [2024-07-22 18:33:51.969187] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:39.963 [2024-07-22 18:33:51.969373] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87202 ] 00:26:40.220 [2024-07-22 18:33:52.153233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.478 [2024-07-22 18:33:52.429336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.069 18:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.069 18:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:41.069 18:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:41.069 18:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:41.069 18:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:41.634 [2024-07-22 18:33:53.398850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:41.634 18:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:41.634 18:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:41.892 nvme0n1 00:26:41.892 18:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:41.892 18:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:42.149 Running I/O for 2 seconds... 
00:26:44.048 00:26:44.048 Latency(us) 00:26:44.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.048 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:44.048 nvme0n1 : 2.01 12154.03 47.48 0.00 0.00 10522.94 9830.40 22163.08 00:26:44.048 =================================================================================================================== 00:26:44.048 Total : 12154.03 47.48 0.00 0.00 10522.94 9830.40 22163.08 00:26:44.048 0 00:26:44.048 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:44.048 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:44.048 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:44.048 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:44.048 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:44.048 | select(.opcode=="crc32c") 00:26:44.048 | "\(.module_name) \(.executed)"' 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87202 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 87202 ']' 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 87202 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87202 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:44.306 killing process with pid 87202 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87202' 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 87202 00:26:44.306 Received shutdown signal, test time was about 2.000000 seconds 00:26:44.306 00:26:44.306 Latency(us) 00:26:44.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.306 =================================================================================================================== 00:26:44.306 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:44.306 18:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
87202 00:26:45.681 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:45.681 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:45.681 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:45.681 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:45.681 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:45.681 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:45.681 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:45.681 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87276 00:26:45.682 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87276 /var/tmp/bperf.sock 00:26:45.682 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 87276 ']' 00:26:45.682 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:45.682 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.682 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:45.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:45.682 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:45.682 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.682 18:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:45.682 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:45.682 Zero copy mechanism will not be used. 00:26:45.682 [2024-07-22 18:33:57.499505] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
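The bdevperf client starting here (pid 87276) follows the same client-side sequence already traced for pid 87202: launch suspended with --wait-for-rpc, complete framework init over the private RPC socket, attach an NVMe/TCP controller with data digest enabled, then drive the workload through bdevperf.py. A condensed sketch of that sequence for this 131072-byte, qd=16 randread run, with repo-relative paths and the wait for the socket to appear elided:

BPERF_SOCK=/var/tmp/bperf.sock
build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
# once the socket is listening:
scripts/rpc.py -s "$BPERF_SOCK" framework_start_init
scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests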
00:26:45.682 [2024-07-22 18:33:57.499687] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87276 ] 00:26:45.682 [2024-07-22 18:33:57.671522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.940 [2024-07-22 18:33:57.910415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.518 18:33:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:46.518 18:33:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:46.518 18:33:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:46.518 18:33:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:46.518 18:33:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:47.106 [2024-07-22 18:33:58.839306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:47.106 18:33:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.106 18:33:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.365 nvme0n1 00:26:47.365 18:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:47.365 18:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:47.365 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:47.365 Zero copy mechanism will not be used. 00:26:47.365 Running I/O for 2 seconds... 
00:26:49.911 00:26:49.911 Latency(us) 00:26:49.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.911 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:49.911 nvme0n1 : 2.00 6078.11 759.76 0.00 0.00 2628.30 2398.02 4051.32 00:26:49.911 =================================================================================================================== 00:26:49.911 Total : 6078.11 759.76 0.00 0.00 2628.30 2398.02 4051.32 00:26:49.911 0 00:26:49.911 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:49.911 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:49.911 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:49.911 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:49.911 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:49.911 | select(.opcode=="crc32c") 00:26:49.911 | "\(.module_name) \(.executed)"' 00:26:49.911 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:49.911 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:49.911 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:49.911 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:49.912 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87276 00:26:49.912 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 87276 ']' 00:26:49.912 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 87276 00:26:49.912 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:49.912 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:49.912 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87276 00:26:49.912 killing process with pid 87276 00:26:49.912 Received shutdown signal, test time was about 2.000000 seconds 00:26:49.912 00:26:49.912 Latency(us) 00:26:49.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.912 =================================================================================================================== 00:26:49.912 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.912 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:49.912 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:49.912 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87276' 00:26:49.912 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 87276 00:26:49.912 18:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
87276 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87343 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87343 /var/tmp/bperf.sock 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 87343 ']' 00:26:51.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.286 18:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:51.286 [2024-07-22 18:34:03.073360] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:26:51.286 [2024-07-22 18:34:03.073546] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87343 ] 00:26:51.286 [2024-07-22 18:34:03.252653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.544 [2024-07-22 18:34:03.518265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.149 18:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:52.149 18:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:52.149 18:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:52.149 18:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:52.149 18:34:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:52.407 [2024-07-22 18:34:04.384252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:52.665 18:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.665 18:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.923 nvme0n1 00:26:52.923 18:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:52.923 18:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:53.182 Running I/O for 2 seconds... 
00:26:55.107 00:26:55.107 Latency(us) 00:26:55.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.107 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:55.107 nvme0n1 : 2.01 12722.59 49.70 0.00 0.00 10050.20 3038.49 21328.99 00:26:55.107 =================================================================================================================== 00:26:55.107 Total : 12722.59 49.70 0.00 0.00 10050.20 3038.49 21328.99 00:26:55.107 0 00:26:55.107 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:55.107 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:55.107 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:55.107 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:55.107 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:55.107 | select(.opcode=="crc32c") 00:26:55.107 | "\(.module_name) \(.executed)"' 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87343 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 87343 ']' 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 87343 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87343 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:55.366 killing process with pid 87343 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87343' 00:26:55.366 Received shutdown signal, test time was about 2.000000 seconds 00:26:55.366 00:26:55.366 Latency(us) 00:26:55.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.366 =================================================================================================================== 00:26:55.366 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 87343 00:26:55.366 18:34:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
87343 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=87414 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 87414 /var/tmp/bperf.sock 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 87414 ']' 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:56.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.738 18:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:56.738 [2024-07-22 18:34:08.533072] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:26:56.738 [2024-07-22 18:34:08.533558] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87414 ] 00:26:56.738 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:56.738 Zero copy mechanism will not be used. 
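After each run the harness verifies which accel module actually computed the CRC32C digests: it pulls accel_get_stats over the bperf RPC socket, filters for the crc32c opcode, and requires a non-zero executed count from the expected module (software here, since scan_dsa=false). A condensed sketch of that check, with the jq filter copied from the trace; the process substitution is an illustration, not the harness code:

read -r acc_module acc_executed < <(
  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
  jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
(( acc_executed > 0 ))            # digests were actually computed
[[ $acc_module == software ]]     # ...and by the expected module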
00:26:56.738 [2024-07-22 18:34:08.711452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.996 [2024-07-22 18:34:09.003319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.559 18:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.559 18:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:57.559 18:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:57.559 18:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:57.559 18:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:58.141 [2024-07-22 18:34:09.970526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:26:58.141 18:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.141 18:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.707 nvme0n1 00:26:58.707 18:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:58.707 18:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:58.707 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:58.707 Zero copy mechanism will not be used. 00:26:58.707 Running I/O for 2 seconds... 
00:27:00.626 00:27:00.626 Latency(us) 00:27:00.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.626 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:00.626 nvme0n1 : 2.00 4858.53 607.32 0.00 0.00 3284.84 2695.91 12392.26 00:27:00.626 =================================================================================================================== 00:27:00.626 Total : 4858.53 607.32 0.00 0.00 3284.84 2695.91 12392.26 00:27:00.626 0 00:27:00.626 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:00.626 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:00.626 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:00.626 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:00.626 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:00.626 | select(.opcode=="crc32c") 00:27:00.626 | "\(.module_name) \(.executed)"' 00:27:00.884 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:00.884 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:00.884 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:00.884 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:00.884 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 87414 00:27:00.884 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 87414 ']' 00:27:00.884 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 87414 00:27:00.884 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:00.884 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:00.884 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87414 00:27:01.143 killing process with pid 87414 00:27:01.143 Received shutdown signal, test time was about 2.000000 seconds 00:27:01.143 00:27:01.143 Latency(us) 00:27:01.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.143 =================================================================================================================== 00:27:01.143 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.143 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:01.143 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:01.143 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87414' 00:27:01.143 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 87414 00:27:01.143 18:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 
87414 00:27:02.517 18:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 87170 00:27:02.518 18:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 87170 ']' 00:27:02.518 18:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 87170 00:27:02.518 18:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:02.518 18:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:02.518 18:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87170 00:27:02.518 killing process with pid 87170 00:27:02.518 18:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:02.518 18:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:02.518 18:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87170' 00:27:02.518 18:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 87170 00:27:02.518 18:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 87170 00:27:03.452 ************************************ 00:27:03.452 END TEST nvmf_digest_clean 00:27:03.452 ************************************ 00:27:03.452 00:27:03.452 real 0m24.924s 00:27:03.452 user 0m47.469s 00:27:03.452 sys 0m5.051s 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:03.452 ************************************ 00:27:03.452 START TEST nvmf_digest_error 00:27:03.452 ************************************ 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=87525 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 87525 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 87525 ']' 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.452 18:34:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.452 [2024-07-22 18:34:15.467504] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:27:03.452 [2024-07-22 18:34:15.467856] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.711 [2024-07-22 18:34:15.638055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.974 [2024-07-22 18:34:15.926906] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.974 [2024-07-22 18:34:15.927243] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.974 [2024-07-22 18:34:15.927430] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.974 [2024-07-22 18:34:15.927724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.974 [2024-07-22 18:34:15.927780] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
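The nvmf_digest_error test starting here exercises the digest failure path rather than the clean path: the target routes crc32c through the error accel module, while the bperf client keeps NVMe error statistics and retries indefinitely as corruption is injected for a batch of operations (the corresponding RPCs appear in the trace that follows). A rough sketch of that setup, with arguments taken from the trace and socket targets assumed to match the harness helpers (target RPCs on the default /var/tmp/spdk.sock, client RPCs on /var/tmp/bperf.sock):

# Target side: assign the crc32c opcode to the error-injecting accel module.
scripts/rpc.py accel_assign_opc -o crc32c -m error
# Client side: keep NVMe error stats, retry forever, attach with data digest enabled.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # start with injection off
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt 256 crc32c ops
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR completions traced further down are the expected outcome of this injection.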
00:27:03.974 [2024-07-22 18:34:15.927998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.539 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:04.539 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:04.539 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:04.539 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.539 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.539 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.539 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:04.539 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.539 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.539 [2024-07-22 18:34:16.489128] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:04.540 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.540 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:04.540 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:04.540 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.540 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.798 [2024-07-22 18:34:16.700149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:05.056 null0 00:27:05.056 [2024-07-22 18:34:16.828375] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.056 [2024-07-22 18:34:16.852564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87564 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87564 /var/tmp/bperf.sock 00:27:05.056 18:34:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 87564 ']' 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:05.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:05.056 18:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:05.056 [2024-07-22 18:34:16.986845] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:27:05.056 [2024-07-22 18:34:16.987456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87564 ] 00:27:05.314 [2024-07-22 18:34:17.185667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.572 [2024-07-22 18:34:17.441245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.830 [2024-07-22 18:34:17.642396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:06.090 18:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:06.090 18:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:06.090 18:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:06.090 18:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:06.348 18:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:06.348 18:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.348 18:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.348 18:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.348 18:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:06.348 18:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:06.607 nvme0n1 00:27:06.607 18:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:06.608 18:34:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.608 18:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:06.608 18:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.608 18:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:06.608 18:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:06.608 Running I/O for 2 seconds... 00:27:06.866 [2024-07-22 18:34:18.661564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.866 [2024-07-22 18:34:18.661648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.866 [2024-07-22 18:34:18.661675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.866 [2024-07-22 18:34:18.682757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.866 [2024-07-22 18:34:18.682816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.866 [2024-07-22 18:34:18.682843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.866 [2024-07-22 18:34:18.703739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.866 [2024-07-22 18:34:18.703801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.866 [2024-07-22 18:34:18.703824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.866 [2024-07-22 18:34:18.724706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.866 [2024-07-22 18:34:18.724761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.866 [2024-07-22 18:34:18.724787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.866 [2024-07-22 18:34:18.745844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.867 [2024-07-22 18:34:18.745921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.867 [2024-07-22 18:34:18.745945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.867 [2024-07-22 18:34:18.767876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.867 [2024-07-22 18:34:18.767940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:5343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.867 [2024-07-22 18:34:18.767967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.867 [2024-07-22 18:34:18.790086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.867 [2024-07-22 18:34:18.790152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.867 [2024-07-22 18:34:18.790175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.867 [2024-07-22 18:34:18.811599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.867 [2024-07-22 18:34:18.811657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.867 [2024-07-22 18:34:18.811683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.867 [2024-07-22 18:34:18.832789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.867 [2024-07-22 18:34:18.832855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.867 [2024-07-22 18:34:18.832878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.867 [2024-07-22 18:34:18.853991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.867 [2024-07-22 18:34:18.854049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.867 [2024-07-22 18:34:18.854075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.867 [2024-07-22 18:34:18.874961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:06.867 [2024-07-22 18:34:18.875028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.867 [2024-07-22 18:34:18.875050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:18.895942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 18:34:18.895997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:18.896022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:18.916735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 
18:34:18.916795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:18.916817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:18.937628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 18:34:18.937683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:18.937715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:18.959165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 18:34:18.959239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:18.959263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:18.980024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 18:34:18.980079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:18.980105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:19.000934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 18:34:19.000996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:19.001020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:19.022488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 18:34:19.022558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:19.022584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:19.043843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 18:34:19.043911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:19.043934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:19.064990] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 18:34:19.065045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:19.065070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:19.086072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 18:34:19.086137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:19.086160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:19.107431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 18:34:19.107495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:19.107523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.126 [2024-07-22 18:34:19.129240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.126 [2024-07-22 18:34:19.129308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.126 [2024-07-22 18:34:19.129332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.150763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.150820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.150846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.171903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.171966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.171989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.192941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.192996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.193022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.213851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.213928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.213951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.234719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.234772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.234798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.255589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.255650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.255673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.276756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.276810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.276839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.297860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.297929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.297951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.325312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.325375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.325397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.356125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.356219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.356244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.377370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.377423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.377448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.385 [2024-07-22 18:34:19.398363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.385 [2024-07-22 18:34:19.398423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.385 [2024-07-22 18:34:19.398445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.419383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.419438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.419467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.441021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.441089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.441111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.462362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.462416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.462442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.483384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.483449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.483472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.504514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.504569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1090 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.504595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.525554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.525614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.525636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.546447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.546500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.546526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.567215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.567274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.567300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.587989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.588041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.588067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.608847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.608907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.608930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.629666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.629718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.629744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.644 [2024-07-22 18:34:19.650418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.644 [2024-07-22 18:34:19.650477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.644 [2024-07-22 18:34:19.650499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.671250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.671303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.903 [2024-07-22 18:34:19.671329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.692217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.692285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.903 [2024-07-22 18:34:19.692308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.713416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.713470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.903 [2024-07-22 18:34:19.713496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.734515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.734575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.903 [2024-07-22 18:34:19.734598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.755478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.755531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.903 [2024-07-22 18:34:19.755561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.776725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.776788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.903 [2024-07-22 18:34:19.776811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.798502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.798560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.903 [2024-07-22 18:34:19.798588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.819905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.819965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.903 [2024-07-22 18:34:19.819988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.841028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.841086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.903 [2024-07-22 18:34:19.841112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.862649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.862710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.903 [2024-07-22 18:34:19.862732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.883746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.883799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.903 [2024-07-22 18:34:19.883825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.903 [2024-07-22 18:34:19.904747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:07.903 [2024-07-22 18:34:19.904808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.904 [2024-07-22 18:34:19.904831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.162 [2024-07-22 18:34:19.925632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.162 [2024-07-22 18:34:19.925685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.162 [2024-07-22 18:34:19.925710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.162 [2024-07-22 
18:34:19.946585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.162 [2024-07-22 18:34:19.946644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.162 [2024-07-22 18:34:19.946666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.162 [2024-07-22 18:34:19.967492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.162 [2024-07-22 18:34:19.967546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.162 [2024-07-22 18:34:19.967572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.162 [2024-07-22 18:34:19.988517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.162 [2024-07-22 18:34:19.988580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.162 [2024-07-22 18:34:19.988602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.162 [2024-07-22 18:34:20.011353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.163 [2024-07-22 18:34:20.011416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.163 [2024-07-22 18:34:20.011446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.163 [2024-07-22 18:34:20.034644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.163 [2024-07-22 18:34:20.034714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.163 [2024-07-22 18:34:20.034737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.163 [2024-07-22 18:34:20.057654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.163 [2024-07-22 18:34:20.057712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.163 [2024-07-22 18:34:20.057738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.163 [2024-07-22 18:34:20.080624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.163 [2024-07-22 18:34:20.080691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.163 [2024-07-22 18:34:20.080714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.163 [2024-07-22 18:34:20.103425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.163 [2024-07-22 18:34:20.103482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.163 [2024-07-22 18:34:20.103513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.163 [2024-07-22 18:34:20.136037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.163 [2024-07-22 18:34:20.136120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.163 [2024-07-22 18:34:20.136170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.163 [2024-07-22 18:34:20.157631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.163 [2024-07-22 18:34:20.157696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.163 [2024-07-22 18:34:20.157720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.163 [2024-07-22 18:34:20.178497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.163 [2024-07-22 18:34:20.178552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.163 [2024-07-22 18:34:20.178582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.422 [2024-07-22 18:34:20.199454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.422 [2024-07-22 18:34:20.199516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.422 [2024-07-22 18:34:20.199538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.422 [2024-07-22 18:34:20.220421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.422 [2024-07-22 18:34:20.220478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.422 [2024-07-22 18:34:20.220504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.422 [2024-07-22 18:34:20.241460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.422 [2024-07-22 18:34:20.241533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.422 
[2024-07-22 18:34:20.241555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.422 [2024-07-22 18:34:20.262398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.422 [2024-07-22 18:34:20.262450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.422 [2024-07-22 18:34:20.262476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.422 [2024-07-22 18:34:20.283324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.422 [2024-07-22 18:34:20.283384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.422 [2024-07-22 18:34:20.283406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.422 [2024-07-22 18:34:20.304418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.422 [2024-07-22 18:34:20.304472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.422 [2024-07-22 18:34:20.304498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.422 [2024-07-22 18:34:20.325454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.422 [2024-07-22 18:34:20.325515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.422 [2024-07-22 18:34:20.325538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.422 [2024-07-22 18:34:20.346492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.422 [2024-07-22 18:34:20.346547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.422 [2024-07-22 18:34:20.346572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.422 [2024-07-22 18:34:20.367586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.422 [2024-07-22 18:34:20.367648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.422 [2024-07-22 18:34:20.367670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.422 [2024-07-22 18:34:20.388665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.422 [2024-07-22 18:34:20.388720] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.422 [2024-07-22 18:34:20.388746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.422 [2024-07-22 18:34:20.409798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.423 [2024-07-22 18:34:20.409865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.423 [2024-07-22 18:34:20.409898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.423 [2024-07-22 18:34:20.431546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.423 [2024-07-22 18:34:20.431607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.423 [2024-07-22 18:34:20.431634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.682 [2024-07-22 18:34:20.453276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.682 [2024-07-22 18:34:20.453348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.682 [2024-07-22 18:34:20.453372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.682 [2024-07-22 18:34:20.474683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.682 [2024-07-22 18:34:20.474741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.682 [2024-07-22 18:34:20.474768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.682 [2024-07-22 18:34:20.495912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.682 [2024-07-22 18:34:20.495977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.682 [2024-07-22 18:34:20.495999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.682 [2024-07-22 18:34:20.518367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.682 [2024-07-22 18:34:20.518447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.682 [2024-07-22 18:34:20.518479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.682 [2024-07-22 18:34:20.540786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:27:08.682 [2024-07-22 18:34:20.540859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.682 [2024-07-22 18:34:20.540883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.682 [2024-07-22 18:34:20.563353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.682 [2024-07-22 18:34:20.563441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.682 [2024-07-22 18:34:20.563466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.682 [2024-07-22 18:34:20.587505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.682 [2024-07-22 18:34:20.587591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.682 [2024-07-22 18:34:20.587616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.682 [2024-07-22 18:34:20.610605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.682 [2024-07-22 18:34:20.610703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.682 [2024-07-22 18:34:20.610728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.682 [2024-07-22 18:34:20.633736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:08.682 [2024-07-22 18:34:20.633832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.682 [2024-07-22 18:34:20.633857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.682 00:27:08.682 Latency(us) 00:27:08.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.682 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:08.682 nvme0n1 : 2.01 11699.78 45.70 0.00 0.00 10930.37 2889.54 35270.28 00:27:08.682 =================================================================================================================== 00:27:08.682 Total : 11699.78 45.70 0.00 0.00 10930.37 2889.54 35270.28 00:27:08.682 0 00:27:08.682 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:08.682 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:08.682 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:08.682 | .driver_specific 00:27:08.682 | .nvme_error 00:27:08.682 | .status_code 00:27:08.682 | .command_transient_transport_error' 00:27:08.682 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 92 > 0 )) 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87564 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 87564 ']' 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 87564 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87564 00:27:09.249 killing process with pid 87564 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87564' 00:27:09.249 Received shutdown signal, test time was about 2.000000 seconds 00:27:09.249 00:27:09.249 Latency(us) 00:27:09.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.249 =================================================================================================================== 00:27:09.249 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 87564 00:27:09.249 18:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 87564 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87632 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87632 /var/tmp/bperf.sock 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 87632 ']' 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:10.185 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock... 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:10.185 18:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:10.185 [2024-07-22 18:34:22.189756] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:27:10.185 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:10.185 Zero copy mechanism will not be used. 00:27:10.185 [2024-07-22 18:34:22.190221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87632 ] 00:27:10.444 [2024-07-22 18:34:22.365619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.704 [2024-07-22 18:34:22.606963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.962 [2024-07-22 18:34:22.809731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:11.220 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.220 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:11.220 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:11.220 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:11.477 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:11.477 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.477 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:11.477 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.477 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:11.477 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.043 nvme0n1 00:27:12.043 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:12.043 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.043 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:12.043 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.043 
18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:12.043 18:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:12.043 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:12.043 Zero copy mechanism will not be used. 00:27:12.043 Running I/O for 2 seconds... 00:27:12.043 [2024-07-22 18:34:23.926785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.043 [2024-07-22 18:34:23.926876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.043 [2024-07-22 18:34:23.926903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.043 [2024-07-22 18:34:23.932620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.043 [2024-07-22 18:34:23.932692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.043 [2024-07-22 18:34:23.932716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.043 [2024-07-22 18:34:23.938455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.043 [2024-07-22 18:34:23.938511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.043 [2024-07-22 18:34:23.938538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:23.944157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:23.944228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:23.944260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:23.949762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:23.949828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:23.949852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:23.956085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:23.956152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:23.956176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:23.961705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:23.961762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:23.961788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:23.967485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:23.967543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:23.967570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:23.973310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:23.973365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:23.973397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:23.979022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:23.979086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:23.979110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:23.984742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:23.984811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:23.984843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:23.990573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:23.990629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:23.990659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:23.996292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:23.996348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:23.996377] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:24.001969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:24.002031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:24.002059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:24.007524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:24.007587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:24.007610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:24.013194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:24.013270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:24.013293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:24.018761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:24.018821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:24.018847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:24.024464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:24.024520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:24.024546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:24.030058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:24.030123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:24.030146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:24.035883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:24.035947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:24.035970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:24.041603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:24.041659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:24.041685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:24.047280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:24.047340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:24.047370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:24.052959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:24.053015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:24.053042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.044 [2024-07-22 18:34:24.058700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.044 [2024-07-22 18:34:24.058766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.044 [2024-07-22 18:34:24.058790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.304 [2024-07-22 18:34:24.064381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.304 [2024-07-22 18:34:24.064443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.304 [2024-07-22 18:34:24.064466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.304 [2024-07-22 18:34:24.069947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.304 [2024-07-22 18:34:24.070012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.304 [2024-07-22 18:34:24.070051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.304 [2024-07-22 18:34:24.075678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.304 [2024-07-22 18:34:24.075734] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.304 [2024-07-22 18:34:24.075760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.304 [2024-07-22 18:34:24.081239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.304 [2024-07-22 18:34:24.081301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.304 [2024-07-22 18:34:24.081324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.304 [2024-07-22 18:34:24.086844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.304 [2024-07-22 18:34:24.086910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.304 [2024-07-22 18:34:24.086934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.304 [2024-07-22 18:34:24.092373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.304 [2024-07-22 18:34:24.092432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.304 [2024-07-22 18:34:24.092471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.304 [2024-07-22 18:34:24.097919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.304 [2024-07-22 18:34:24.097974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.304 [2024-07-22 18:34:24.098013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.103586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.103645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.103671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.109192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.109286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.109309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.114978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.115043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.115066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.120655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.120711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.120738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.126478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.126534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.126560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.132273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.132335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.132358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.138095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.138159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.138194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.143844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.143930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.143965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.149667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.149734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.149766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.305 
[2024-07-22 18:34:24.155462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.155517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.155546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.161097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.161161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.161184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.166771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.166837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.166860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.172508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.172564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.172591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.178258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.178317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.178349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.184041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.184123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.184164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.190546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.190630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.190660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.196751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.196827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.196851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.202589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.202693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.202740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.208490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.208551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.208578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.214409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.214466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.214492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.220040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.220104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.220127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.225841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.225923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.225948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.231572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.231692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 
[2024-07-22 18:34:24.231718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.237555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.237622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.237649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.243445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.243518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.243543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.249358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.305 [2024-07-22 18:34:24.249423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.305 [2024-07-22 18:34:24.249448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.305 [2024-07-22 18:34:24.255120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.306 [2024-07-22 18:34:24.255189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.255231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.306 [2024-07-22 18:34:24.260848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.306 [2024-07-22 18:34:24.260905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.260932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.306 [2024-07-22 18:34:24.266653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.306 [2024-07-22 18:34:24.266709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.266735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.306 [2024-07-22 18:34:24.272317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.306 [2024-07-22 18:34:24.272382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.272406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.306 [2024-07-22 18:34:24.278284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.306 [2024-07-22 18:34:24.278348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.278371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.306 [2024-07-22 18:34:24.284092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.306 [2024-07-22 18:34:24.284149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.284175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.306 [2024-07-22 18:34:24.289635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.306 [2024-07-22 18:34:24.289705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.289731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.306 [2024-07-22 18:34:24.295311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.306 [2024-07-22 18:34:24.295367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.295393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.306 [2024-07-22 18:34:24.300771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.306 [2024-07-22 18:34:24.300838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.300862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.306 [2024-07-22 18:34:24.306456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.306 [2024-07-22 18:34:24.306523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.306546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.306 [2024-07-22 18:34:24.312201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:27:12.306 [2024-07-22 18:34:24.312268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.312295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.306 [2024-07-22 18:34:24.317991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.306 [2024-07-22 18:34:24.318054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.306 [2024-07-22 18:34:24.318083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.323613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.323675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.323698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.329389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.329450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.329472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.335097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.335163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.335190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.340831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.340887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.340914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.346576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.346634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.346660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.352367] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.352430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.352454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.358180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.358261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.358283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.363984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.364041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.364067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.369947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.370003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.370030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.375983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.376046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.376080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.381759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.381825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.381847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.387485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.387549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.387572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.393265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.393321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.393347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.398949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.399019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.399048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.404620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.404699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.566 [2024-07-22 18:34:24.404723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.566 [2024-07-22 18:34:24.410192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.566 [2024-07-22 18:34:24.410265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.410288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.415800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.415876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.415899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.421325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.421381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.421407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.426934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.426988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.427015] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.432370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.432430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.432452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.437749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.437812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.437834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.443275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.443329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.443355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.448709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.448763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.448789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.454080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.454141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.454162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.459440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.459503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.459526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.464821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.464890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.464915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.470255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.470308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.470333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.475507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.475567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.475589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.480799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.480860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.480882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.486284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.486343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.486365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.492028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.492090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.492117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.497819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.497900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.497929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.503574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.503638] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.503661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.509160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.509242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.509265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.514796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.514851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.514880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.520485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.520539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.520565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.526145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.526199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.526243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.531674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.531738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.531761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.537274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.537333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.537356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.542810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.542864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.542890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.548412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.548466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.548492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.554090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.567 [2024-07-22 18:34:24.554151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.567 [2024-07-22 18:34:24.554174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.567 [2024-07-22 18:34:24.559618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.568 [2024-07-22 18:34:24.559685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.568 [2024-07-22 18:34:24.559708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.568 [2024-07-22 18:34:24.565116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.568 [2024-07-22 18:34:24.565170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.568 [2024-07-22 18:34:24.565196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.568 [2024-07-22 18:34:24.570555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.568 [2024-07-22 18:34:24.570610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.568 [2024-07-22 18:34:24.570635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.568 [2024-07-22 18:34:24.576026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.568 [2024-07-22 18:34:24.576086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.568 [2024-07-22 18:34:24.576107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:12.826 [2024-07-22 18:34:24.581678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.581740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.581762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.587347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.587409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.587432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.592860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.592915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.592959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.598411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.598610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.598744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.604193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.604303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.604327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.609675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.609740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.609763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.615085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.615146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.615167] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.620546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.620600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.620621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.626048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.626103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.626124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.631700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.631754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.631775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.637226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.637279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.637300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.642699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.642752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.642773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.648079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.648135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.648157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.653541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.653597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:12.827 [2024-07-22 18:34:24.653619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.658969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.659024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.659046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.664341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.664394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.664416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.669834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.669925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.669948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.675341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.675395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.675416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.680772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.680826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.680861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.686391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.686445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.686466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.691879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.691931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.691952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.697310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.697363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.697384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.702714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.702767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.702787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.708098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.708153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.708173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.713491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.713545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.713566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.718988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.719042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.827 [2024-07-22 18:34:24.719063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.827 [2024-07-22 18:34:24.724386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.827 [2024-07-22 18:34:24.724440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.724461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.729660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.729714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.729736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.735088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.735143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.735164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.740512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.740581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.740602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.745846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.745910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.745937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.751159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.751231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.751253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.756489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.756542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.756564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.761771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.761825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.761847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 
18:34:24.767134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.767188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.767244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.772529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.772584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.772605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.778024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.778078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.778100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.783557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.783610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.783630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.788949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.789004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.789026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.794441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.794493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.794513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.799833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.799886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.799907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.805246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.805299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.805319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.810698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.810763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.810782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.816229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.816313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.816335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.821953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.822007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.822028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.827403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.827477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.827498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.832929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.832983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 18:34:24.833005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.828 [2024-07-22 18:34:24.838526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:12.828 [2024-07-22 18:34:24.838581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.828 [2024-07-22 
18:34:24.838603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.844071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.844126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.844147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.849667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.849721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.849742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.855251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.855303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.855326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.860766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.860836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.860857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.866363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.866418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.866439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.871960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.872015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.872037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.877531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.877585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.877606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.883139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.883195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.883235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.888695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.888749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.888770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.894249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.894301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.894323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.899723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.899775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.899798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.905132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.905186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.905225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.910572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.088 [2024-07-22 18:34:24.910626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.088 [2024-07-22 18:34:24.910647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.088 [2024-07-22 18:34:24.916046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 
[2024-07-22 18:34:24.916100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.916121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.921470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.921523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.921545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.927042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.927095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.927133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.932531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.932583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.932604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.938117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.938172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.938193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.943552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.943606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.943626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.949035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.949089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.949110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.954514] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.954568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.954589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.960044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.960112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.960134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.965649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.965718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.965739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.971147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.971246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.971269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.976652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.976707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.976729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.982100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.982154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.982175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.987430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.987482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.987504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.992853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.992908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.992929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:24.998357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:24.998413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:24.998434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:25.003733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:25.003801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:25.003822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:25.009058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:25.009112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:25.009134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:25.014455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:25.014508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:25.014529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:25.019970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:25.020039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:25.020060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:25.025392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:25.025446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:25.025466] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:25.030855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:25.030908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:25.030930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:25.036265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:25.036319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:25.036340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:25.041697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:25.041752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:25.041773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.089 [2024-07-22 18:34:25.047096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.089 [2024-07-22 18:34:25.047151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.089 [2024-07-22 18:34:25.047172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.090 [2024-07-22 18:34:25.052482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.090 [2024-07-22 18:34:25.052536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.090 [2024-07-22 18:34:25.052557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.090 [2024-07-22 18:34:25.057914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.090 [2024-07-22 18:34:25.057966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.090 [2024-07-22 18:34:25.057987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.090 [2024-07-22 18:34:25.063411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.090 [2024-07-22 18:34:25.063466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.090 [2024-07-22 18:34:25.063487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.090 [2024-07-22 18:34:25.068791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.090 [2024-07-22 18:34:25.068846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.090 [2024-07-22 18:34:25.068866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.090 [2024-07-22 18:34:25.074161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.090 [2024-07-22 18:34:25.074229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.090 [2024-07-22 18:34:25.074252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.090 [2024-07-22 18:34:25.079520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.090 [2024-07-22 18:34:25.079573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.090 [2024-07-22 18:34:25.079594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.090 [2024-07-22 18:34:25.084837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.090 [2024-07-22 18:34:25.084891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.090 [2024-07-22 18:34:25.084912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.090 [2024-07-22 18:34:25.090157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.090 [2024-07-22 18:34:25.090232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.090 [2024-07-22 18:34:25.090265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.090 [2024-07-22 18:34:25.095498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.090 [2024-07-22 18:34:25.095551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.090 [2024-07-22 18:34:25.095572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.090 [2024-07-22 18:34:25.100881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.090 [2024-07-22 
18:34:25.100933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.090 [2024-07-22 18:34:25.100955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.106324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.106378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.106399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.111675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.111731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.111753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.117119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.117186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.117224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.122658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.122724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.122745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.128194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.128277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.128299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.133582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.133637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.133658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.138995] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.139050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.139072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.144523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.144576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.144598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.149987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.150049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.150071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.155452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.155511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.155532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.160923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.160978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.160999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.350 [2024-07-22 18:34:25.166349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.350 [2024-07-22 18:34:25.166402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.350 [2024-07-22 18:34:25.166424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.171874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.171929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.171950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.177399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.177452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.177474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.182931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.183001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.183023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.188456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.188510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.188533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.194024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.194078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.194099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.199419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.199473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.199494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.204949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.205033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.205055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.210570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.210623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.210644] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.216100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.216155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.216177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.221701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.221770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.221791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.227450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.227505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.227526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.232946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.233014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.233034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.238433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.238487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.238508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.243920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.243975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.244003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.249431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.249484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.249505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.254957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.255011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.255033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.260459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.260514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.260536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.266070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.266125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.266146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.271642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.271696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.271718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.277138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.277194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.277235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.282587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.282641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.282662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.288064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 
18:34:25.288117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.288138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.293514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.293582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.293642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.298944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.298999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.299020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.304381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.304435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.304456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.309826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.309929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.351 [2024-07-22 18:34:25.309959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.351 [2024-07-22 18:34:25.315405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.351 [2024-07-22 18:34:25.315469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.352 [2024-07-22 18:34:25.315490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.352 [2024-07-22 18:34:25.320767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.352 [2024-07-22 18:34:25.320821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.352 [2024-07-22 18:34:25.320843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.352 [2024-07-22 18:34:25.326181] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.352 [2024-07-22 18:34:25.326248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.352 [2024-07-22 18:34:25.326270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.352 [2024-07-22 18:34:25.331718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.352 [2024-07-22 18:34:25.331787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.352 [2024-07-22 18:34:25.331809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.352 [2024-07-22 18:34:25.337145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.352 [2024-07-22 18:34:25.337199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.352 [2024-07-22 18:34:25.337239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.352 [2024-07-22 18:34:25.342656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.352 [2024-07-22 18:34:25.342725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.352 [2024-07-22 18:34:25.342746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.352 [2024-07-22 18:34:25.348077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.352 [2024-07-22 18:34:25.348132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.352 [2024-07-22 18:34:25.348153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.352 [2024-07-22 18:34:25.353487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.352 [2024-07-22 18:34:25.353541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.352 [2024-07-22 18:34:25.353561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.352 [2024-07-22 18:34:25.358786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.352 [2024-07-22 18:34:25.358839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.352 [2024-07-22 18:34:25.358861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.352 [2024-07-22 18:34:25.364122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.352 [2024-07-22 18:34:25.364177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.352 [2024-07-22 18:34:25.364198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.369760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.369838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.369860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.375427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.375480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.375501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.380949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.381004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.381026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.386493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.386548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.386569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.392125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.392180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.392202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.397629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.397683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.397704] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.403173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.403244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.403266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.408634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.408687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.408709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.414153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.414222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.414245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.419859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.419914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.419935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.425323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.425376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.425398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.430747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.430800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.430821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.436390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.436443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.436465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.441833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.441897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.441919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.447402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.447455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.447476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.452995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.453064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.453086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.458673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.458739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.458759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.464208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.464277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.464297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.469527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.469580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.469601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.475139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 
18:34:25.475198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.612 [2024-07-22 18:34:25.475240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.612 [2024-07-22 18:34:25.480705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.612 [2024-07-22 18:34:25.480775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.480796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.486240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.486290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.486310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.491574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.491628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.491650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.497087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.497143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.497165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.502515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.502569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.502590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.508140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.508193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.508234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.513671] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.513725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.513747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.519093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.519146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.519167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.524548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.524604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.524625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.530060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.530143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.530165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.535764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.535833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.535854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.541273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.541327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.541349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.546711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.546764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.546786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.552155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.552226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.552249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.557626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.557694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.557716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.563210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.563283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.563305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.568683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.568751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.568772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.574162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.574230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.574253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.579581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.579649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.579670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.585006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.585061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.585083] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.590474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.590528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.590550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.595968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.596022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.596044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.601437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.601492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.601513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.607019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.607089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.607110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.612528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.612597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.612634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.618087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.618141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.618162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.613 [2024-07-22 18:34:25.623683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.613 [2024-07-22 18:34:25.623736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.613 [2024-07-22 18:34:25.623757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.873 [2024-07-22 18:34:25.629169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.629237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.629260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.634666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.634734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.634755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.640188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.640273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.640296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.645669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.645737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.645759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.651173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.651246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.651268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.656670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.656739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.656760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.662188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 
18:34:25.662259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.662281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.667693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.667747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.667770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.673136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.673191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.673230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.678671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.678726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.678747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.684192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.684259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.684281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.689700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.689772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.689794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.695360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.695414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.695436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.700875] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.700930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.700951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.706411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.706465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.706487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.711838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.711894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.711915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.717319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.717374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.717396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.722767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.722820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.722841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.728219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.728271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.728292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.733580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.733633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.733655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.739023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.739078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.739100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.744511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.744566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.744589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.749902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.749956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.749977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.755327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.755382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.755403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.760702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.760756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.760777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.766086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.766141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.874 [2024-07-22 18:34:25.766162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.874 [2024-07-22 18:34:25.771475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.874 [2024-07-22 18:34:25.771529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.771551] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.776777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.776831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.776852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.782193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.782261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.782284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.787635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.787689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.787711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.793095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.793149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.793171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.798615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.798669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.798691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.804093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.804149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.804170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.809557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.809611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.809634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.815063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.815118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.815139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.820542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.820596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.820618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.826076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.826130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.826151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.831612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.831666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.831687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.837143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.837197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.837236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.842713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.842766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.842787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.848152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 
18:34:25.848226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.848250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.853630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.853684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.853705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.859081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.859137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.859158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.864487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.864541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.864562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.869984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.870039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.870060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.875490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.875545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.875566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.880892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.880946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.880968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.875 [2024-07-22 18:34:25.886419] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:13.875 [2024-07-22 18:34:25.886473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.875 [2024-07-22 18:34:25.886495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.134 [2024-07-22 18:34:25.891883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:14.134 [2024-07-22 18:34:25.891937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.134 [2024-07-22 18:34:25.891958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.134 [2024-07-22 18:34:25.897417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:14.134 [2024-07-22 18:34:25.897489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.134 [2024-07-22 18:34:25.897518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.134 [2024-07-22 18:34:25.902932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:14.134 [2024-07-22 18:34:25.902987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.134 [2024-07-22 18:34:25.903008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.134 [2024-07-22 18:34:25.908437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:14.134 [2024-07-22 18:34:25.908491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.134 [2024-07-22 18:34:25.908513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.134 [2024-07-22 18:34:25.913901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:14.134 [2024-07-22 18:34:25.913953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.134 [2024-07-22 18:34:25.913975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.134 [2024-07-22 18:34:25.919242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:27:14.134 [2024-07-22 18:34:25.919295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.134 [2024-07-22 18:34:25.919316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.134 00:27:14.134 Latency(us) 00:27:14.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.134 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:14.134 nvme0n1 : 2.00 5577.52 697.19 0.00 0.00 2863.58 2517.18 6404.65 00:27:14.134 =================================================================================================================== 00:27:14.134 Total : 5577.52 697.19 0.00 0.00 2863.58 2517.18 6404.65 00:27:14.134 0 00:27:14.134 18:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:14.134 18:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:14.134 | .driver_specific 00:27:14.134 | .nvme_error 00:27:14.134 | .status_code 00:27:14.134 | .command_transient_transport_error' 00:27:14.134 18:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:14.134 18:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 360 > 0 )) 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87632 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 87632 ']' 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 87632 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87632 00:27:14.393 killing process with pid 87632 00:27:14.393 Received shutdown signal, test time was about 2.000000 seconds 00:27:14.393 00:27:14.393 Latency(us) 00:27:14.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.393 =================================================================================================================== 00:27:14.393 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87632' 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 87632 00:27:14.393 18:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 87632 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 
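Editor's note on the check traced just above: the (( 360 > 0 )) test at host/digest.sh@71 is get_transient_errcount verifying that the randread run with --ddgst and crc32c corruption produced a non-zero number of completions with status TRANSIENT TRANSPORT ERROR (360 here). A minimal sketch of that read-back, using only the rpc.py call and jq filter as they appear in this trace (the /var/tmp/bperf.sock path and bdev name are specific to this run):

  # Sketch: count NVMe completions reported as TRANSIENT TRANSPORT ERROR,
  # mirroring get_transient_errcount in the trace above.
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
              bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error')
  (( count > 0 ))   # the digest-error test only requires that some errors were observed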
00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87699 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87699 /var/tmp/bperf.sock 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 87699 ']' 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:15.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:15.796 18:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.796 [2024-07-22 18:34:27.540076] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:27:15.796 [2024-07-22 18:34:27.540516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87699 ] 00:27:15.796 [2024-07-22 18:34:27.715924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.054 [2024-07-22 18:34:27.952319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.311 [2024-07-22 18:34:28.158052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:16.570 18:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:16.570 18:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:16.570 18:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:16.570 18:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:16.828 18:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:16.828 18:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.828 18:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:16.828 18:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.828 18:34:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:16.828 18:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:17.086 nvme0n1 00:27:17.086 18:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:17.086 18:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.086 18:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:17.086 18:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.086 18:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:17.086 18:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:17.344 Running I/O for 2 seconds... 00:27:17.344 [2024-07-22 18:34:29.193178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fef90 00:27:17.344 [2024-07-22 18:34:29.196295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.344 [2024-07-22 18:34:29.196365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.344 [2024-07-22 18:34:29.213643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195feb58 00:27:17.344 [2024-07-22 18:34:29.216747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.344 [2024-07-22 18:34:29.216818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:17.344 [2024-07-22 18:34:29.233737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:27:17.344 [2024-07-22 18:34:29.236785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.344 [2024-07-22 18:34:29.236844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:17.344 [2024-07-22 18:34:29.253636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fda78 00:27:17.344 [2024-07-22 18:34:29.256631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.344 [2024-07-22 18:34:29.256690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:17.344 [2024-07-22 18:34:29.273568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) 
with pdu=0x2000195fd208 00:27:17.344 [2024-07-22 18:34:29.276544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.344 [2024-07-22 18:34:29.276615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:17.344 [2024-07-22 18:34:29.293443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc998 00:27:17.344 [2024-07-22 18:34:29.296372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.344 [2024-07-22 18:34:29.296430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:17.344 [2024-07-22 18:34:29.313263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fc128 00:27:17.344 [2024-07-22 18:34:29.316213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.344 [2024-07-22 18:34:29.316285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:17.344 [2024-07-22 18:34:29.333170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb8b8 00:27:17.345 [2024-07-22 18:34:29.336060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.345 [2024-07-22 18:34:29.336118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:17.345 [2024-07-22 18:34:29.353636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fb048 00:27:17.345 [2024-07-22 18:34:29.356589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.345 [2024-07-22 18:34:29.356648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.373820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fa7d8 00:27:17.644 [2024-07-22 18:34:29.376701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.376758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.393650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f9f68 00:27:17.644 [2024-07-22 18:34:29.396488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.396550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.413436] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f96f8 00:27:17.644 [2024-07-22 18:34:29.416235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.416292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.433261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8e88 00:27:17.644 [2024-07-22 18:34:29.436412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.436470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.453864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f8618 00:27:17.644 [2024-07-22 18:34:29.456698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.456754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.474004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7da8 00:27:17.644 [2024-07-22 18:34:29.476742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.476804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.494050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f7538 00:27:17.644 [2024-07-22 18:34:29.496811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.496877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.514537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6cc8 00:27:17.644 [2024-07-22 18:34:29.517230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.517293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.534706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f6458 00:27:17.644 [2024-07-22 18:34:29.537468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.537526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:17.644 
[2024-07-22 18:34:29.554679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5be8 00:27:17.644 [2024-07-22 18:34:29.557298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.557356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.574704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f5378 00:27:17.644 [2024-07-22 18:34:29.577300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.577360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.594705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4b08 00:27:17.644 [2024-07-22 18:34:29.597315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.597373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:17.644 [2024-07-22 18:34:29.614811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f4298 00:27:17.644 [2024-07-22 18:34:29.617500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.644 [2024-07-22 18:34:29.617555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.634908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f3a28 00:27:17.925 [2024-07-22 18:34:29.637486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.637550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.655519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f31b8 00:27:17.925 [2024-07-22 18:34:29.658070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.658134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.675822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f2948 00:27:17.925 [2024-07-22 18:34:29.678386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.678445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.695940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f20d8 00:27:17.925 [2024-07-22 18:34:29.698450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.698509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.715987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f1868 00:27:17.925 [2024-07-22 18:34:29.718445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.718506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.736017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0ff8 00:27:17.925 [2024-07-22 18:34:29.738459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.738518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.756222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195f0788 00:27:17.925 [2024-07-22 18:34:29.758629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.758689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.776320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:27:17.925 [2024-07-22 18:34:29.778693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.778753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.796335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:27:17.925 [2024-07-22 18:34:29.798716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.798774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.816292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:27:17.925 [2024-07-22 18:34:29.818615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.818675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.836141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:27:17.925 [2024-07-22 18:34:29.838449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.838509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.855959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:27:17.925 [2024-07-22 18:34:29.858248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.858310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.875776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:27:17.925 [2024-07-22 18:34:29.878047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.878116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.895587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:27:17.925 [2024-07-22 18:34:29.897811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.925 [2024-07-22 18:34:29.897869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:17.925 [2024-07-22 18:34:29.915328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:27:17.925 [2024-07-22 18:34:29.917516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.926 [2024-07-22 18:34:29.917573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:17.926 [2024-07-22 18:34:29.935082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:27:17.926 [2024-07-22 18:34:29.937260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.926 [2024-07-22 18:34:29.937318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:29.954983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:27:18.184 [2024-07-22 18:34:29.957141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:18.184 [2024-07-22 18:34:29.957217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:29.975295] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:27:18.184 [2024-07-22 18:34:29.977464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:29.977526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:29.995626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:27:18.184 [2024-07-22 18:34:29.997743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:29.997818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:30.015850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:27:18.184 [2024-07-22 18:34:30.017986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:30.018046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:30.036351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:27:18.184 [2024-07-22 18:34:30.038447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:30.038507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:30.056639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:27:18.184 [2024-07-22 18:34:30.058702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:30.058792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:30.076998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:27:18.184 [2024-07-22 18:34:30.079059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:30.079130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:30.096920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:27:18.184 [2024-07-22 18:34:30.099648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:24700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:30.099704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:30.117121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:27:18.184 [2024-07-22 18:34:30.119093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:30.119164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:30.136663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:27:18.184 [2024-07-22 18:34:30.138630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:30.138701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:30.156177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:27:18.184 [2024-07-22 18:34:30.158113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:30.158183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:30.175723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:27:18.184 [2024-07-22 18:34:30.177610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:30.177665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:18.184 [2024-07-22 18:34:30.195102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:27:18.184 [2024-07-22 18:34:30.196982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.184 [2024-07-22 18:34:30.197036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.215522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:27:18.443 [2024-07-22 18:34:30.217498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.217563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.236361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:27:18.443 [2024-07-22 18:34:30.238200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.238270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.256296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:27:18.443 [2024-07-22 18:34:30.258104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.258160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.275890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:27:18.443 [2024-07-22 18:34:30.277664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.277719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.295652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:27:18.443 [2024-07-22 18:34:30.297400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.297455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.315337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:27:18.443 [2024-07-22 18:34:30.317076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.317131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.335080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:27:18.443 [2024-07-22 18:34:30.336801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.336855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.354973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:27:18.443 [2024-07-22 18:34:30.356705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.356760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.374803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with 
pdu=0x2000195e01f8 00:27:18.443 [2024-07-22 18:34:30.376451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.376509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.394557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:27:18.443 [2024-07-22 18:34:30.396171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.396237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.413973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:27:18.443 [2024-07-22 18:34:30.415583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.415636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.433389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:27:18.443 [2024-07-22 18:34:30.434969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.435023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:18.443 [2024-07-22 18:34:30.453553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:27:18.443 [2024-07-22 18:34:30.455264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.443 [2024-07-22 18:34:30.455327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.483328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de038 00:27:18.702 [2024-07-22 18:34:30.486450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.702 [2024-07-22 18:34:30.486507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.503301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195de8a8 00:27:18.702 [2024-07-22 18:34:30.506370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.702 [2024-07-22 18:34:30.506421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.523312] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df118 00:27:18.702 [2024-07-22 18:34:30.526380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.702 [2024-07-22 18:34:30.526431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.543254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195df988 00:27:18.702 [2024-07-22 18:34:30.546271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.702 [2024-07-22 18:34:30.546320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.563029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e01f8 00:27:18.702 [2024-07-22 18:34:30.566026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.702 [2024-07-22 18:34:30.566075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.582843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e0a68 00:27:18.702 [2024-07-22 18:34:30.585776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.702 [2024-07-22 18:34:30.585826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.602601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e12d8 00:27:18.702 [2024-07-22 18:34:30.605506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.702 [2024-07-22 18:34:30.605555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.622226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e1b48 00:27:18.702 [2024-07-22 18:34:30.625092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.702 [2024-07-22 18:34:30.625142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.641931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e23b8 00:27:18.702 [2024-07-22 18:34:30.644812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.702 [2024-07-22 18:34:30.644861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 
sqhd:0070 p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.661562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e2c28 00:27:18.702 [2024-07-22 18:34:30.664399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.702 [2024-07-22 18:34:30.664449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.680991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3498 00:27:18.702 [2024-07-22 18:34:30.683826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.702 [2024-07-22 18:34:30.683887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:18.702 [2024-07-22 18:34:30.700473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e3d08 00:27:18.702 [2024-07-22 18:34:30.703268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.703 [2024-07-22 18:34:30.703324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:18.961 [2024-07-22 18:34:30.719895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4578 00:27:18.961 [2024-07-22 18:34:30.722641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.961 [2024-07-22 18:34:30.722703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:18.961 [2024-07-22 18:34:30.739367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e4de8 00:27:18.961 [2024-07-22 18:34:30.742114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.961 [2024-07-22 18:34:30.742169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:18.961 [2024-07-22 18:34:30.759017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5658 00:27:18.961 [2024-07-22 18:34:30.761759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.961 [2024-07-22 18:34:30.761814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:18.961 [2024-07-22 18:34:30.779221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e5ec8 00:27:18.961 [2024-07-22 18:34:30.782051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.961 [2024-07-22 18:34:30.782109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:18.961 [2024-07-22 18:34:30.799552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6738 00:27:18.961 [2024-07-22 18:34:30.802265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.961 [2024-07-22 18:34:30.802321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:18.961 [2024-07-22 18:34:30.819545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e6fa8 00:27:18.961 [2024-07-22 18:34:30.822233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.961 [2024-07-22 18:34:30.822287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:18.961 [2024-07-22 18:34:30.839290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e7818 00:27:18.961 [2024-07-22 18:34:30.841925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.961 [2024-07-22 18:34:30.841992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:18.961 [2024-07-22 18:34:30.860138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e8088 00:27:18.961 [2024-07-22 18:34:30.862883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.962 [2024-07-22 18:34:30.862939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:18.962 [2024-07-22 18:34:30.880994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e88f8 00:27:18.962 [2024-07-22 18:34:30.883924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.962 [2024-07-22 18:34:30.883986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:18.962 [2024-07-22 18:34:30.902071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e9168 00:27:18.962 [2024-07-22 18:34:30.904887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.962 [2024-07-22 18:34:30.904942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:18.962 [2024-07-22 18:34:30.923262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195e99d8 00:27:18.962 [2024-07-22 18:34:30.926089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.962 [2024-07-22 18:34:30.926147] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:18.962 [2024-07-22 18:34:30.944200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ea248 00:27:18.962 [2024-07-22 18:34:30.946977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.962 [2024-07-22 18:34:30.947035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:18.962 [2024-07-22 18:34:30.965355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eaab8 00:27:18.962 [2024-07-22 18:34:30.968101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.962 [2024-07-22 18:34:30.968159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:19.220 [2024-07-22 18:34:30.986526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eb328 00:27:19.220 [2024-07-22 18:34:30.989038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.220 [2024-07-22 18:34:30.989096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:19.220 [2024-07-22 18:34:31.006976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ebb98 00:27:19.220 [2024-07-22 18:34:31.009463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.220 [2024-07-22 18:34:31.009521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:19.220 [2024-07-22 18:34:31.026974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ec408 00:27:19.220 [2024-07-22 18:34:31.029423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.220 [2024-07-22 18:34:31.029479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:19.220 [2024-07-22 18:34:31.046711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ecc78 00:27:19.220 [2024-07-22 18:34:31.049091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.220 [2024-07-22 18:34:31.049147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:19.221 [2024-07-22 18:34:31.066915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ed4e8 00:27:19.221 [2024-07-22 18:34:31.069710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3033 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:19.221 [2024-07-22 18:34:31.069768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:19.221 [2024-07-22 18:34:31.089145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195edd58 00:27:19.221 [2024-07-22 18:34:31.091737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.221 [2024-07-22 18:34:31.091796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:19.221 [2024-07-22 18:34:31.110353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ee5c8 00:27:19.221 [2024-07-22 18:34:31.112661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.221 [2024-07-22 18:34:31.112720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:19.221 [2024-07-22 18:34:31.130393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eee38 00:27:19.221 [2024-07-22 18:34:31.132690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.221 [2024-07-22 18:34:31.132744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:19.221 [2024-07-22 18:34:31.150745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195ef6a8 00:27:19.221 [2024-07-22 18:34:31.153232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.221 [2024-07-22 18:34:31.153297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:19.221 [2024-07-22 18:34:31.171991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195eff18 00:27:19.221 [2024-07-22 18:34:31.174534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:19.221 [2024-07-22 18:34:31.174604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:19.221 00:27:19.221 Latency(us) 00:27:19.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.221 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:19.221 nvme0n1 : 2.01 12579.23 49.14 0.00 0.00 10164.90 4915.20 38844.97 00:27:19.221 =================================================================================================================== 00:27:19.221 Total : 12579.23 49.14 0.00 0.00 10164.90 4915.20 38844.97 00:27:19.221 0 00:27:19.221 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:19.221 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:19.221 18:34:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:19.221 | .driver_specific 00:27:19.221 | .nvme_error 00:27:19.221 | .status_code 00:27:19.221 | .command_transient_transport_error' 00:27:19.221 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 99 > 0 )) 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87699 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 87699 ']' 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 87699 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87699 00:27:19.479 killing process with pid 87699 00:27:19.479 Received shutdown signal, test time was about 2.000000 seconds 00:27:19.479 00:27:19.479 Latency(us) 00:27:19.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.479 =================================================================================================================== 00:27:19.479 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87699' 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 87699 00:27:19.479 18:34:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 87699 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=87765 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 87765 /var/tmp/bperf.sock 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 87765 ']' 00:27:20.854 18:34:32 
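The pass/fail decision for that run is the (( 99 > 0 )) check above: host/digest.sh pulls the per-bdev error counters over the bdevperf RPC socket with bdev_get_iostat and extracts the transient-transport-error count with jq before killing the bdevperf process (pid 87699). The same query can be issued by hand while a bdevperf instance started with -z is still listening on /var/tmp/bperf.sock; a minimal sketch of that check:

  # Read the digest-error counter the way get_transient_errcount does above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "transient transport errors recorded: $errcount"

The counter is exposed because bdev_nvme runs with --nvme-error-stat, the same option the next run sets again below before attaching its own controller.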
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:20.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:20.854 18:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:20.854 [2024-07-22 18:34:32.653482] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:27:20.854 [2024-07-22 18:34:32.654075] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87765 ] 00:27:20.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:20.854 Zero copy mechanism will not be used. 00:27:20.854 [2024-07-22 18:34:32.838395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.113 [2024-07-22 18:34:33.120449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.371 [2024-07-22 18:34:33.325456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:21.629 18:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.629 18:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:21.629 18:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:21.629 18:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:21.887 18:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:21.887 18:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.887 18:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.146 18:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.146 18:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.146 18:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.407 nvme0n1 00:27:22.407 18:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:22.407 18:34:34 
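That block is the whole setup for the 131072-byte, queue-depth-16 pass: bdevperf is started idle with -z on /var/tmp/bperf.sock, error statistics and unlimited bdev retries are enabled with bdev_nvme_set_options, any previous crc32c injection is cleared, the controller is attached with data digest enabled (--ddgst) against the TCP target at 10.0.0.2:4420, and the accel layer is then armed to corrupt crc32c results with the -t corrupt -i 32 arguments traced above. A hedged replay of those RPCs, in script order (bperf_rpc in the script targets /var/tmp/bperf.sock; rpc_cmd targets the main target application, whose socket is not shown in this excerpt, so SPDK's usual default /var/tmp/spdk.sock is assumed here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bperf_rpc calls go to the bdevperf instance.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # rpc_cmd calls go to the target application (default socket assumed).
  $rpc -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 32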
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.407 18:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.407 18:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.407 18:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:22.407 18:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:22.407 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:22.407 Zero copy mechanism will not be used. 00:27:22.407 Running I/O for 2 seconds... 00:27:22.407 [2024-07-22 18:34:34.324737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.407 [2024-07-22 18:34:34.325157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.407 [2024-07-22 18:34:34.325204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.407 [2024-07-22 18:34:34.331585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.407 [2024-07-22 18:34:34.331999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.407 [2024-07-22 18:34:34.332038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.407 [2024-07-22 18:34:34.338498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.407 [2024-07-22 18:34:34.338909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.407 [2024-07-22 18:34:34.338946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.407 [2024-07-22 18:34:34.345286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.407 [2024-07-22 18:34:34.345682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.407 [2024-07-22 18:34:34.345730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.407 [2024-07-22 18:34:34.352007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.408 [2024-07-22 18:34:34.352406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.408 [2024-07-22 18:34:34.352445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.408 [2024-07-22 18:34:34.358553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
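Nothing runs until bdevperf.py perform_tests is issued above; the -z flag left bdevperf idle after configuration, so that single RPC starts the 2-second 131072-byte randwrite pass whose digest-error records begin here. The per-command prints now show len:32 instead of the len:1 seen in the 4096-byte run, which is consistent with the larger I/O size, assuming the same 4096-byte blocks:

  # 131072-byte I/Os over 4096-byte blocks -> 32 blocks per WRITE, matching len:32.
  echo $(( 131072 / 4096 ))    # 32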
with pdu=0x2000195fef90 00:27:22.408 [2024-07-22 18:34:34.358920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.408 [2024-07-22 18:34:34.358958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.408 [2024-07-22 18:34:34.364935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.408 [2024-07-22 18:34:34.365297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.408 [2024-07-22 18:34:34.365342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.408 [2024-07-22 18:34:34.371430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.408 [2024-07-22 18:34:34.371780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.408 [2024-07-22 18:34:34.371817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.408 [2024-07-22 18:34:34.377935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.408 [2024-07-22 18:34:34.378321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.408 [2024-07-22 18:34:34.378362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.408 [2024-07-22 18:34:34.384484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.408 [2024-07-22 18:34:34.384854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.408 [2024-07-22 18:34:34.384899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.408 [2024-07-22 18:34:34.391162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.408 [2024-07-22 18:34:34.391522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.408 [2024-07-22 18:34:34.391559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.408 [2024-07-22 18:34:34.397401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.408 [2024-07-22 18:34:34.397741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.408 [2024-07-22 18:34:34.397776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.408 [2024-07-22 18:34:34.404134] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.408 [2024-07-22 18:34:34.404534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.408 [2024-07-22 18:34:34.404587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.408 [2024-07-22 18:34:34.411002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.408 [2024-07-22 18:34:34.411418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.408 [2024-07-22 18:34:34.411456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.408 [2024-07-22 18:34:34.417707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.408 [2024-07-22 18:34:34.418099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.408 [2024-07-22 18:34:34.418137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.667 [2024-07-22 18:34:34.424574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.667 [2024-07-22 18:34:34.424918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.667 [2024-07-22 18:34:34.424965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.667 [2024-07-22 18:34:34.431304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.667 [2024-07-22 18:34:34.431694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.667 [2024-07-22 18:34:34.431748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.667 [2024-07-22 18:34:34.438124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.667 [2024-07-22 18:34:34.438500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.667 [2024-07-22 18:34:34.438538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.667 [2024-07-22 18:34:34.444859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.667 [2024-07-22 18:34:34.445256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.667 [2024-07-22 18:34:34.445315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.667 [2024-07-22 18:34:34.451710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.667 [2024-07-22 18:34:34.452070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.667 [2024-07-22 18:34:34.452123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.667 [2024-07-22 18:34:34.457975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.667 [2024-07-22 18:34:34.458388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.667 [2024-07-22 18:34:34.458424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.667 [2024-07-22 18:34:34.464280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.667 [2024-07-22 18:34:34.464600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.667 [2024-07-22 18:34:34.464675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.667 [2024-07-22 18:34:34.470546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.667 [2024-07-22 18:34:34.470898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.667 [2024-07-22 18:34:34.470934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.667 [2024-07-22 18:34:34.476898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.667 [2024-07-22 18:34:34.477274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.667 [2024-07-22 18:34:34.477326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.483522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.483896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.483959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.490263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.490616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.490664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.496714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.497068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.497106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.503291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.503649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.503696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.509990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.510368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.510418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.516544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.516906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.516945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.523091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.523460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.523506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.529803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.530172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.530231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.536557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.536935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.536974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.543218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.543567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.543613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.549859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.550245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.550293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.556448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.556824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.556863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.563084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.563445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.563496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.569589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.569949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.569999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.576124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.576505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.576548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.582613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.582967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.583015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.589121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.589488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.589540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.595711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.596066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.596104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.602341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.602720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.602768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.608911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.609302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.609349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.615467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.615832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.615869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.622127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.622491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.622537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.628595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.628950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.628997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.635175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.635544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.635583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.641827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.642185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.642258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.648378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.648723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.668 [2024-07-22 18:34:34.648769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.668 [2024-07-22 18:34:34.654975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.668 [2024-07-22 18:34:34.655349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.669 [2024-07-22 18:34:34.655388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.669 [2024-07-22 18:34:34.661568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.669 [2024-07-22 18:34:34.661925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.669 [2024-07-22 18:34:34.661971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.669 [2024-07-22 18:34:34.668072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.669 [2024-07-22 18:34:34.668440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.669 [2024-07-22 18:34:34.668485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.669 [2024-07-22 18:34:34.674654] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.669 [2024-07-22 18:34:34.675008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.669 [2024-07-22 18:34:34.675047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.669 [2024-07-22 18:34:34.681087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.669 [2024-07-22 18:34:34.681449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.669 [2024-07-22 18:34:34.681501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.687587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.687933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.687981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.694254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.694610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.694648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.700829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.701188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.701240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.707485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.707837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.707885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.714115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.714520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.714558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.720703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.721068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.721106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.727305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.727662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.727707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.733823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.734219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.734257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.740484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.740849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.740887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.747016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.747378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.747428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.753523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.753905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.753943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.760121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.760494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.760538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.766690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.767051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.767098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.773194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.773565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.773604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.779738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.780127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.780166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.786409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.786789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.786836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.793142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.793539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.793584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.799851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.800234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.800292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.806402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.806754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.806800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.812940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.929 [2024-07-22 18:34:34.813314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.929 [2024-07-22 18:34:34.813352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.929 [2024-07-22 18:34:34.819461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.819814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.819852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.826005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.826380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.826432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.832512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.832868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.832906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.838967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.839348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.839386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.845529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.845931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.845978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.852037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.852416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.852455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.858530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.858887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.858927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.865045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.865409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.865457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.871560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.871903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.871950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.878020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.878403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.878442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.884709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.885123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.885171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.891469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.891824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.891870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.898047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.898421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.898466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.904711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.905064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.905112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.911357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.911722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.911768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.917874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.918262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.918310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.924450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.924805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.924851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.930977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.931340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.931386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.937478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.937833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.937872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.930 [2024-07-22 18:34:34.943941] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:22.930 [2024-07-22 18:34:34.944304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.930 [2024-07-22 18:34:34.944350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.191 [2024-07-22 18:34:34.950515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.191 [2024-07-22 18:34:34.950879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.191 [2024-07-22 18:34:34.950927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.191 [2024-07-22 18:34:34.957135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.191 [2024-07-22 18:34:34.957530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.191 [2024-07-22 18:34:34.957575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.191 [2024-07-22 18:34:34.963818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.191 [2024-07-22 18:34:34.964164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.191 [2024-07-22 18:34:34.964223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.191 [2024-07-22 18:34:34.970537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:34.970887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:34.970934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:34.977117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:34.977517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:34.977560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:34.983776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:34.984125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:34.984173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:34.990798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:34.991155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:34.991203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:34.997591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:34.997970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:34.998008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.004319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.004687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.004735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.011061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.011436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.011491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.017776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.018158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.018196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.024549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.024898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.024945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.031261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.031617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.031664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.037835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.038218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.038256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.044415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.044769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.044814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.051039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.051407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.051460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.057571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.057942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.057980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.064278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.064621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.064675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.070977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.071343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.071390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.077996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.078393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.078443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.085074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.085438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.085486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.093202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.093625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.093663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.100260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.100632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.100670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.107135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.107507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.107548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.113833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.114222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.114260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.120564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.120924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.120962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.127119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.127482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.127523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.133849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.134235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.134273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.140471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.140826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.192 [2024-07-22 18:34:35.140864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.192 [2024-07-22 18:34:35.146979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.192 [2024-07-22 18:34:35.147342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.193 [2024-07-22 18:34:35.147382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.193 [2024-07-22 18:34:35.153648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.193 [2024-07-22 18:34:35.154021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.193 [2024-07-22 18:34:35.154060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.193 [2024-07-22 18:34:35.160169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.193 [2024-07-22 18:34:35.160545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.193 [2024-07-22 18:34:35.160584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.193 [2024-07-22 18:34:35.166700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.193 [2024-07-22 18:34:35.167044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.193 [2024-07-22 18:34:35.167084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.193 [2024-07-22 18:34:35.173194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:27:23.193 [2024-07-22 18:34:35.173558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.193 [2024-07-22 18:34:35.173596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.193 [2024-07-22 18:34:35.179684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.193 [2024-07-22 18:34:35.180038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.193 [2024-07-22 18:34:35.180076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.193 [2024-07-22 18:34:35.186319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.193 [2024-07-22 18:34:35.187035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.193 [2024-07-22 18:34:35.187144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.193 [2024-07-22 18:34:35.194750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.193 [2024-07-22 18:34:35.195116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.193 [2024-07-22 18:34:35.195251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.193 [2024-07-22 18:34:35.202718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.193 [2024-07-22 18:34:35.202838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.193 [2024-07-22 18:34:35.202886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.472 [2024-07-22 18:34:35.208964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.472 [2024-07-22 18:34:35.209058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.472 [2024-07-22 18:34:35.209109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.472 [2024-07-22 18:34:35.215683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.472 [2024-07-22 18:34:35.215785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.472 [2024-07-22 18:34:35.215826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.472 [2024-07-22 18:34:35.222369] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.472 [2024-07-22 18:34:35.222532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.472 [2024-07-22 18:34:35.222574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.472 [2024-07-22 18:34:35.228883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.472 [2024-07-22 18:34:35.228994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.472 [2024-07-22 18:34:35.229046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.472 [2024-07-22 18:34:35.235494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.472 [2024-07-22 18:34:35.235627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.472 [2024-07-22 18:34:35.235668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.472 [2024-07-22 18:34:35.242166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.472 [2024-07-22 18:34:35.242287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.472 [2024-07-22 18:34:35.242329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.472 [2024-07-22 18:34:35.248907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.472 [2024-07-22 18:34:35.249008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.472 [2024-07-22 18:34:35.249060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.255603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.255703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.255746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.262314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.262430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.262478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.269027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.269132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.269222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.275853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.275959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.276001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.282560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.282675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.282714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.289418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.289514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.289564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.296288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.296391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.296430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.303185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.303294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.303340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.310131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.310243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.310293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.317065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.317226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.317267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.324409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.324511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.324558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.331485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.331583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.331634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.338676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.338802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.338841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.345407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.345504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.345552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.352203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.352454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.352503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.359475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.359622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 
18:34:35.359661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.366483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.366602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.366651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.373591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.373705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.373786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.380502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.380946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.381011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.388190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.388453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.388552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.395710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.395836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.395876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.402142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.402311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.402351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.408815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.408930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.408970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.415314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.415417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.415456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.421949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.422039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.422079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.428626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.473 [2024-07-22 18:34:35.428737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.473 [2024-07-22 18:34:35.428776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.473 [2024-07-22 18:34:35.435380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.474 [2024-07-22 18:34:35.435489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.474 [2024-07-22 18:34:35.435529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.474 [2024-07-22 18:34:35.442333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.474 [2024-07-22 18:34:35.442432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.474 [2024-07-22 18:34:35.442486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.474 [2024-07-22 18:34:35.449314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.474 [2024-07-22 18:34:35.449456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.474 [2024-07-22 18:34:35.449497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.474 [2024-07-22 18:34:35.456391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.474 [2024-07-22 18:34:35.456497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.474 [2024-07-22 18:34:35.456536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.474 [2024-07-22 18:34:35.463559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.474 [2024-07-22 18:34:35.463674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.474 [2024-07-22 18:34:35.463714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.474 [2024-07-22 18:34:35.470481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.474 [2024-07-22 18:34:35.470588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.474 [2024-07-22 18:34:35.470627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.474 [2024-07-22 18:34:35.477365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.474 [2024-07-22 18:34:35.477485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.474 [2024-07-22 18:34:35.477524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.733 [2024-07-22 18:34:35.484376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.733 [2024-07-22 18:34:35.484543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.733 [2024-07-22 18:34:35.484582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.733 [2024-07-22 18:34:35.491337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.733 [2024-07-22 18:34:35.491450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.733 [2024-07-22 18:34:35.491489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.733 [2024-07-22 18:34:35.498326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.733 [2024-07-22 18:34:35.498431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.733 [2024-07-22 18:34:35.498472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.733 [2024-07-22 18:34:35.505677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:27:23.733 [2024-07-22 18:34:35.505780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.733 [2024-07-22 18:34:35.505843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.733 [2024-07-22 18:34:35.512795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.512915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.512956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.520113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.520248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.520297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.527378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.527481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.527521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.534631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.534755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.534793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.541724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.541821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.541861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.548638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.548755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.548796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.555724] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.555817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.555857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.562608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.562721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.562760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.569617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.569750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.569790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.576714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.576821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.576859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.583536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.583683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.583748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.590676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.590805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.590845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.597652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.597774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.597813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.604605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.604701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.604741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.611501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.611619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.611659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.618713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.618831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.618869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.625678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.625784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.625822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.632631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.632749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.632798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.639456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.639552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.639597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.646669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.646802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.646833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.653133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.653296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.653341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.659742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.659831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.659863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.666312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.666415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.666447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.672816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.672920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.672951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.679322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.679410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.679442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.686577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.686684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 18:34:35.686718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.734 [2024-07-22 18:34:35.693300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.734 [2024-07-22 18:34:35.693392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.734 [2024-07-22 
18:34:35.693425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.735 [2024-07-22 18:34:35.699891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.735 [2024-07-22 18:34:35.699986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.735 [2024-07-22 18:34:35.700019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.735 [2024-07-22 18:34:35.706687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.735 [2024-07-22 18:34:35.706791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.735 [2024-07-22 18:34:35.706830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.735 [2024-07-22 18:34:35.714611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.735 [2024-07-22 18:34:35.714716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.735 [2024-07-22 18:34:35.714756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.735 [2024-07-22 18:34:35.721592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.735 [2024-07-22 18:34:35.721697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.735 [2024-07-22 18:34:35.721730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.735 [2024-07-22 18:34:35.728441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.735 [2024-07-22 18:34:35.728560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.735 [2024-07-22 18:34:35.728596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.735 [2024-07-22 18:34:35.735415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.735 [2024-07-22 18:34:35.735531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.735 [2024-07-22 18:34:35.735565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.735 [2024-07-22 18:34:35.742214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.735 [2024-07-22 18:34:35.742312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.735 [2024-07-22 18:34:35.742344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.735 [2024-07-22 18:34:35.748967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.735 [2024-07-22 18:34:35.749076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.735 [2024-07-22 18:34:35.749108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.755682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.755781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.755812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.762345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.762456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.762489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.769106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.769202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.769251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.775633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.775737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.775768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.782287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.782381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.782427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.788768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.788881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.788912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.795270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.795380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.795412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.802343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.802447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.802492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.808788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.808891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.808920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.814974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.815076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.815105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.821069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.821165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.821195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.827254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.827355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.827384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.833243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.833356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.833385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.839921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.840060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.840091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.846319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.846430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.846459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.852326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.852433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.852463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.858554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.858652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.858682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.864678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.864797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.864828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.870856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.870956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.995 [2024-07-22 18:34:35.870986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.995 [2024-07-22 18:34:35.876829] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.995 [2024-07-22 18:34:35.876939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.876969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.883191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.883374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.883406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.890065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.890155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.890187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.896100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.896208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.896252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.902807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.902914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.902946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.909089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.909208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.909252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.915426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.915531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.915562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.922239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.922361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.922404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.928744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.928851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.928882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.935380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.935465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.935513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.942166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.942278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.942310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.948883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.948984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.949013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.955689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.955797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.955827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.962675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.962768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.962798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.969300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.969436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.969466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.976057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.976198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.976244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.982644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.982761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.982793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.988917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.989045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.989077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:35.995306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:35.995412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:35.995442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:36.001711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:36.001837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 18:34:36.001869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.996 [2024-07-22 18:34:36.008883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:23.996 [2024-07-22 18:34:36.009005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.996 [2024-07-22 
18:34:36.009040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.016162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.016319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.016355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.023443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.023551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.023587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.030335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.030462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.030496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.037150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.037297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.037334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.043931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.044062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.044093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.050182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.050325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.050357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.056882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.056998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.057028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.063610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.063723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.063755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.070188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.070336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.070366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.076739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.076848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.076883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.083724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.083833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.083865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.090874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.091010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.091041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.098093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.098196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.098243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.105167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.105300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.105351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.112009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.112128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.112176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.119126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.119246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.119283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.126062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.126157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.126189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.132839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.132950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.132978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.139503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.139601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.139630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.146361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.256 [2024-07-22 18:34:36.146478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.256 [2024-07-22 18:34:36.146508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.256 [2024-07-22 18:34:36.153144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with 
pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.153291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.153328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.160272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.160397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.160434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.167303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.167411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.167444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.174459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.174586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.174630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.181500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.181586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.181618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.188455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.188550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.188597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.195650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.195763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.195793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.202686] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.202786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.202820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.209476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.209576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.209605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.216619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.216710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.216741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.223263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.223373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.223404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.230353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.230456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.230498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.237232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.237367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.237441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.244639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.244735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.244765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.251025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.251191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.251221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.257643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.257735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.257764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.264298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.264422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.264467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.257 [2024-07-22 18:34:36.271171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.257 [2024-07-22 18:34:36.271319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.257 [2024-07-22 18:34:36.271351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.516 [2024-07-22 18:34:36.278157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.516 [2024-07-22 18:34:36.278255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.516 [2024-07-22 18:34:36.278288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.516 [2024-07-22 18:34:36.284931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.516 [2024-07-22 18:34:36.285028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.516 [2024-07-22 18:34:36.285087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.516 [2024-07-22 18:34:36.291668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.516 [2024-07-22 18:34:36.291759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.516 [2024-07-22 18:34:36.291791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:24.516 [2024-07-22 18:34:36.298545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.516 [2024-07-22 18:34:36.298681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.516 [2024-07-22 18:34:36.298711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:24.516 [2024-07-22 18:34:36.305554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.516 [2024-07-22 18:34:36.305685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.516 [2024-07-22 18:34:36.305731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:24.516 [2024-07-22 18:34:36.312536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:27:24.516 [2024-07-22 18:34:36.312631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.516 [2024-07-22 18:34:36.312664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.516 00:27:24.516 Latency(us) 00:27:24.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.516 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:24.516 nvme0n1 : 2.00 4591.28 573.91 0.00 0.00 3475.71 2532.07 8460.10 00:27:24.516 =================================================================================================================== 00:27:24.516 Total : 4591.28 573.91 0.00 0.00 3475.71 2532.07 8460.10 00:27:24.516 0 00:27:24.516 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:24.516 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:24.516 | .driver_specific 00:27:24.516 | .nvme_error 00:27:24.516 | .status_code 00:27:24.516 | .command_transient_transport_error' 00:27:24.516 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:24.516 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 296 > 0 )) 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 87765 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 87765 ']' 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 87765 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = 
Linux ']' 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87765 00:27:24.775 killing process with pid 87765 00:27:24.775 Received shutdown signal, test time was about 2.000000 seconds 00:27:24.775 00:27:24.775 Latency(us) 00:27:24.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.775 =================================================================================================================== 00:27:24.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87765' 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 87765 00:27:24.775 18:34:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 87765 00:27:26.189 18:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 87525 00:27:26.189 18:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 87525 ']' 00:27:26.189 18:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 87525 00:27:26.189 18:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:27:26.189 18:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:26.189 18:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87525 00:27:26.189 killing process with pid 87525 00:27:26.189 18:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:26.189 18:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:26.189 18:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87525' 00:27:26.189 18:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 87525 00:27:26.189 18:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 87525 00:27:27.123 00:27:27.123 real 0m23.742s 00:27:27.123 user 0m45.082s 00:27:27.123 sys 0m5.018s 00:27:27.123 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.123 ************************************ 00:27:27.123 END TEST nvmf_digest_error 00:27:27.123 ************************************ 00:27:27.123 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 
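The digest-error check traced above (host/digest.sh@71/@27/@18, just before bperf pid 87765 is shut down) boils down to a single RPC call: the script asks the bdevperf instance, over its /var/tmp/bperf.sock RPC socket, for per-bdev I/O statistics and pulls the command_transient_transport_error counter out of the reply with jq. A minimal sketch of that query, assuming a bdevperf process is still listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1 (both true while this test was running):

  # Ask bdevperf for iostat on nvme0n1 and extract the NVMe transient-transport-error counter
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The ddgst error-injection case passes when at least one such error was seen;
  # in the run above the counter came back as 296.
  (( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: $errcount"

The jq path mirrors the filter traced at host/digest.sh@28: each data-digest failure logged by tcp.c above is completed back to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what increments this counter.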
00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:27.382 rmmod nvme_tcp 00:27:27.382 rmmod nvme_fabrics 00:27:27.382 rmmod nvme_keyring 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:27.382 Process with pid 87525 is not found 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 87525 ']' 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 87525 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 87525 ']' 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 87525 00:27:27.382 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (87525) - No such process 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 87525 is not found' 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:27.382 00:27:27.382 real 0m49.396s 00:27:27.382 user 1m32.687s 00:27:27.382 sys 0m10.410s 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:27.382 ************************************ 00:27:27.382 END TEST nvmf_digest 00:27:27.382 ************************************ 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.382 ************************************ 00:27:27.382 START TEST nvmf_host_multipath 00:27:27.382 ************************************ 00:27:27.382 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:27.686 * Looking for test storage... 00:27:27.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:27:27.686 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
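For readers following the nvmf_veth_init trace that continues below: the commands that follow build a small veth topology so the target can listen on 10.0.0.2 and 10.0.0.3 inside a network namespace while the initiator stays on the host at 10.0.0.1. The condensed sketch here uses the same namespace, interface and bridge names that appear in the log; ordering, cleanup of stale devices and error handling are simplified, so treat it as an illustration of the layout rather than a copy of nvmf/common.sh.

# Condensed sketch of the topology nvmf_veth_init sets up (names taken from the trace below)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                  # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # NVMF_SECOND_TARGET_IP
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge joins the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                         # connectivity check, as in the log

The pings at the end mirror the checks in the trace: both target addresses must be reachable from the host side of the bridge before the target is started in the namespace.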
00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:27.687 Cannot find device "nvmf_tgt_br" 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:27.687 Cannot find device "nvmf_tgt_br2" 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:27.687 Cannot find device "nvmf_tgt_br" 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:27.687 Cannot find device "nvmf_tgt_br2" 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:27.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:27.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:27.687 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:27.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:27:27.946 00:27:27.946 --- 10.0.0.2 ping statistics --- 00:27:27.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.946 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:27.946 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:27.946 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:27:27.946 00:27:27.946 --- 10.0.0.3 ping statistics --- 00:27:27.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.946 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:27.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:27.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:27:27.946 00:27:27.946 --- 10.0.0.1 ping statistics --- 00:27:27.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.946 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=88050 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 88050 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 88050 ']' 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.946 18:34:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:27.946 [2024-07-22 18:34:39.935755] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
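For orientation while reading the rpc.py calls traced in the following lines: they first configure the nvmf_tgt that was just started inside the namespace, then wire up bdevperf as a multipath initiator. A condensed recap is below, with the long /home/vagrant/spdk_repo/spdk prefixes shortened and the backgrounding and waitforlisten steps omitted for brevity; every command and flag is copied from the trace.

# Target side (nvmf_tgt is already running in nvmf_tgt_ns_spdk with -i 0 -e 0xFFFF -m 0x3)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Initiator side: bdevperf is a separate SPDK app with its own RPC socket
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &

The two bdev_nvme_attach_controller calls give Nvme0 one path per listener port; the -x multipath flag on the second call is what allows that connection to join Nvme0 as an additional path rather than a separate controller, which the set_ANA_state and confirm_io_on_port phases later in the log then exercise.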
00:27:27.946 [2024-07-22 18:34:39.936162] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.206 [2024-07-22 18:34:40.110681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:28.463 [2024-07-22 18:34:40.403615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.463 [2024-07-22 18:34:40.404001] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.463 [2024-07-22 18:34:40.404155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.463 [2024-07-22 18:34:40.404323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.464 [2024-07-22 18:34:40.404370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.464 [2024-07-22 18:34:40.404690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.464 [2024-07-22 18:34:40.404991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.722 [2024-07-22 18:34:40.613240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:27:28.981 18:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.981 18:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:27:28.981 18:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:28.981 18:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:28.981 18:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:28.981 18:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.981 18:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=88050 00:27:28.981 18:34:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:29.240 [2024-07-22 18:34:41.228341] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.240 18:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:29.807 Malloc0 00:27:29.807 18:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:30.065 18:34:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:30.323 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:30.582 [2024-07-22 18:34:42.352799] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.582 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:30.845 [2024-07-22 18:34:42.640991] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:30.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:30.845 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=88106 00:27:30.845 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:30.845 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:30.845 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 88106 /var/tmp/bdevperf.sock 00:27:30.845 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 88106 ']' 00:27:30.845 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:30.845 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:30.845 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:30.845 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:30.845 18:34:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:31.807 18:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:31.807 18:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:27:31.807 18:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:32.065 18:34:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:32.323 Nvme0n1 00:27:32.323 18:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:32.889 Nvme0n1 00:27:32.889 18:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:27:32.889 18:34:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:33.822 18:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:33.822 18:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:34.080 18:34:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:34.337 18:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:34.337 18:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88151 00:27:34.337 18:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88050 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:34.337 18:34:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:40.919 Attaching 4 probes... 00:27:40.919 @path[10.0.0.2, 4421]: 12985 00:27:40.919 @path[10.0.0.2, 4421]: 13445 00:27:40.919 @path[10.0.0.2, 4421]: 13808 00:27:40.919 @path[10.0.0.2, 4421]: 13902 00:27:40.919 @path[10.0.0.2, 4421]: 13579 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88151 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:40.919 18:34:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:41.180 18:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:41.180 18:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88264 00:27:41.180 18:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:41.180 18:34:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88050 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:47.742 Attaching 4 probes... 00:27:47.742 @path[10.0.0.2, 4420]: 13819 00:27:47.742 @path[10.0.0.2, 4420]: 14030 00:27:47.742 @path[10.0.0.2, 4420]: 14029 00:27:47.742 @path[10.0.0.2, 4420]: 13776 00:27:47.742 @path[10.0.0.2, 4420]: 14026 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88264 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:47.742 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:48.002 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:48.002 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88050 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:48.002 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88377 00:27:48.002 18:34:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:54.598 18:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:54.598 18:35:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:54.598 Attaching 4 probes... 00:27:54.598 @path[10.0.0.2, 4421]: 10182 00:27:54.598 @path[10.0.0.2, 4421]: 13361 00:27:54.598 @path[10.0.0.2, 4421]: 13463 00:27:54.598 @path[10.0.0.2, 4421]: 13645 00:27:54.598 @path[10.0.0.2, 4421]: 13473 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88377 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:54.598 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:54.857 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:27:54.857 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88484 00:27:54.857 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88050 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:54.857 18:35:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:01.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:01.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:28:01.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:28:01.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:01.426 Attaching 4 probes... 
00:28:01.426 00:28:01.426 00:28:01.426 00:28:01.426 00:28:01.426 00:28:01.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:01.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:01.426 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:01.427 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:28:01.427 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:28:01.427 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:28:01.427 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88484 00:28:01.427 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:01.427 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:28:01.427 18:35:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:01.427 18:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:01.687 18:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:28:01.687 18:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88050 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:01.687 18:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88601 00:28:01.687 18:35:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:08.414 Attaching 4 probes... 
00:28:08.414 @path[10.0.0.2, 4421]: 13342 00:28:08.414 @path[10.0.0.2, 4421]: 13312 00:28:08.414 @path[10.0.0.2, 4421]: 13552 00:28:08.414 @path[10.0.0.2, 4421]: 13434 00:28:08.414 @path[10.0.0.2, 4421]: 13380 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88601 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:08.414 18:35:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:28:09.347 18:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:09.347 18:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88724 00:28:09.347 18:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88050 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:09.348 18:35:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:15.914 Attaching 4 probes... 
00:28:15.914 @path[10.0.0.2, 4420]: 13551 00:28:15.914 @path[10.0.0.2, 4420]: 14055 00:28:15.914 @path[10.0.0.2, 4420]: 14051 00:28:15.914 @path[10.0.0.2, 4420]: 14065 00:28:15.914 @path[10.0.0.2, 4420]: 13977 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88724 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:15.914 [2024-07-22 18:35:27.572063] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:15.914 18:35:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:22.503 18:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:22.503 18:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=88889 00:28:22.503 18:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88050 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:22.503 18:35:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:29.059 18:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:29.059 18:35:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:29.059 Attaching 4 probes... 
00:28:29.059 @path[10.0.0.2, 4421]: 13096 00:28:29.059 @path[10.0.0.2, 4421]: 13160 00:28:29.059 @path[10.0.0.2, 4421]: 12572 00:28:29.059 @path[10.0.0.2, 4421]: 13023 00:28:29.059 @path[10.0.0.2, 4421]: 12933 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 88889 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 88106 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 88106 ']' 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 88106 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88106 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:29.059 killing process with pid 88106 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88106' 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 88106 00:28:29.059 18:35:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 88106 00:28:29.059 Connection closed with partial response: 00:28:29.059 00:28:29.059 00:28:29.328 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 88106 00:28:29.328 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:29.328 [2024-07-22 18:34:42.774828] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:28:29.328 [2024-07-22 18:34:42.775105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88106 ] 00:28:29.328 [2024-07-22 18:34:42.953466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.328 [2024-07-22 18:34:43.197216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.328 [2024-07-22 18:34:43.398529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:29.328 Running I/O for 90 seconds... 00:28:29.328 [2024-07-22 18:34:53.026810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.026950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 
sqhd:0055 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.328 [2024-07-22 18:34:53.027884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.027937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.027969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.027991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.028022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.028045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.028077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.028098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.028129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.028151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.028181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.028203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.028249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.028282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.028315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.028338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.028369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.028391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.028434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.028459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.028490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.028512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:29.328 [2024-07-22 18:34:53.028543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.328 [2024-07-22 18:34:53.028565] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:29.328 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs for each outstanding I/O on qid:1 (nsid:1, len:8): every READ and WRITE command completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02); the burst at 2024-07-22 18:34:53 covers lba 60504-61424, the burst at 2024-07-22 18:34:59 covers lba 33088-33976 ...] 00:28:29.334 [2024-07-22 18:34:59.601664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.601687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.601716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.602092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.602162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.602232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.602286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.602337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.602397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.602449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.602500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.602551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.602601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:33496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.602652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.602702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.602767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:33520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.602819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.602868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.602920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.602951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.602972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.603022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.603085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 
dnr:0 00:28:29.334 [2024-07-22 18:34:59.603114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.603135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.603185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.603253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:34:59.603305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.603356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.603406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.603456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.603505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:33568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.603555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:33576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.603605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.603635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.603657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:34:59.604249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.334 [2024-07-22 18:34:59.604285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:35:06.644318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:35:06.644404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:35:06.644508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:35:06.644547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:35:06.644583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.334 [2024-07-22 18:35:06.644606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:29.334 [2024-07-22 18:35:06.644636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.644658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.644687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.644709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.644739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.644761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.644804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.644825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.644853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.644874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.644903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.644924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.644953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.644974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.645023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.645098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.645158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.645208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.645278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.645345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.645396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:29.335 [2024-07-22 18:35:06.645447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.645496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.645547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.645597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.645646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.645696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.645757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.645810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.645859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.645936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.645965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 
nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.645987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.646040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.646091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.646143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.646195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.646306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.646375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.646426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.646499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.646555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.646606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.646657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.335 [2024-07-22 18:35:06.646708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.646759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:29.335 [2024-07-22 18:35:06.646789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.335 [2024-07-22 18:35:06.646810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.646839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.646861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.646891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.646912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.646941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.646962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.646992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.647033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.647086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:28:29.336 [2024-07-22 18:35:06.647116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.647138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.647967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.647996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.336 [2024-07-22 18:35:06.648019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:29.336 [2024-07-22 18:35:06.648748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.336 [2024-07-22 18:35:06.648798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:29.336 [2024-07-22 18:35:06.648828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.337 [2024-07-22 18:35:06.648850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:29.337 [2024-07-22 18:35:06.648880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.337 [2024-07-22 18:35:06.648902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:29.337 [2024-07-22 18:35:06.648938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.337 [2024-07-22 18:35:06.648961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:29.337 [2024-07-22 18:35:06.648990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.337 [2024-07-22 18:35:06.649012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:29.337 [2024-07-22 18:35:06.649041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.337 [2024-07-22 18:35:06.649063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:29.337 [2024-07-22 18:35:06.649099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.337 [2024-07-22 18:35:06.649120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:29.337 [2024-07-22 18:35:06.649160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.337 [2024-07-22 18:35:06.649189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:29.337 [2024-07-22 18:35:06.649234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.337 [2024-07-22 18:35:06.649258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:29.337 [2024-07-22 18:35:06.649287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 
nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.649309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.649359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.649409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.649459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.649509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.649566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.649616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.649668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.649718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.649768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.649818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.649878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.649949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.649989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.650042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.650093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.650144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.650195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.650273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.650325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 
dnr:0 00:28:29.338 [2024-07-22 18:35:06.650376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.650427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.650488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.650548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.650617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.650639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.651486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.338 [2024-07-22 18:35:06.651535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.651583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.651606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.651646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.651669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.651708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.651730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.651768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.651790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.651829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.651851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.651889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.651911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.651950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.651973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.652030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.652056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.652097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.652119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.652162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.652197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.652257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.652281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.652320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.652343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:29.338 [2024-07-22 18:35:06.652386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.338 [2024-07-22 18:35:06.652407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:06.652446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:06.652468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:06.652508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:06.652531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:06.652575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:06.652599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.980440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.980523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.980613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.980643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.980678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.980701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.980732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.980754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.980784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.980806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.980837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.980881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.980924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.980946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.980975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:29.339 [2024-07-22 18:35:19.980997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.981048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.981099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.981155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.981220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.981279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.981331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.981383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.981434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.981497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.981546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.981614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.981666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.981719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:43288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.981770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.981829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.981882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.981961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.981992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.982013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.982043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.982065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.982095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.982118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.982149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.982171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.982231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.982255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.982296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.982319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.982349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.339 [2024-07-22 18:35:19.982371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.982438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.982468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.982492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.982511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.982532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.982550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.339 [2024-07-22 18:35:19.982570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.339 [2024-07-22 18:35:19.982589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.982610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.982629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.982649] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.982667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.982699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.982718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.982738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.982756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.982776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.982794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.982815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.982833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.982853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.982882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.982904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.982922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.982943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.982962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.982981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.983019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.983060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.983103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.340 [2024-07-22 18:35:19.983807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.983847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.983886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.983924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43464 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.983963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.983992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.984011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.984031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.984050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.984070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.984089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.984109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.984128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.984148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.984167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.984187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.984223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.984247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.984266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.340 [2024-07-22 18:35:19.984286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.340 [2024-07-22 18:35:19.984305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.984346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 
[2024-07-22 18:35:19.984384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.984423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.984461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.984961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.984980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.985026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.985065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:29.341 [2024-07-22 18:35:19.985104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.341 [2024-07-22 18:35:19.985725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(5) to be set 00:28:29.341 [2024-07-22 18:35:19.985775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.341 [2024-07-22 18:35:19.985790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.341 [2024-07-22 18:35:19.985808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43688 len:8 PRP1 0x0 PRP2 0x0 00:28:29.341 [2024-07-22 18:35:19.985826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.341 [2024-07-22 18:35:19.985870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.341 [2024-07-22 18:35:19.985898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44144 len:8 PRP1 0x0 PRP2 0x0 00:28:29.341 [2024-07-22 18:35:19.985918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.341 [2024-07-22 18:35:19.985936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.341 [2024-07-22 18:35:19.985951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.985966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44152 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.985984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44160 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44168 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44176 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44184 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44192 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44200 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44208 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 
18:35:19.986496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44216 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44224 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44232 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44240 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44248 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44256 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.986899] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:29.342 [2024-07-22 18:35:19.986913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:29.342 [2024-07-22 18:35:19.986928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44264 len:8 PRP1 0x0 PRP2 0x0 00:28:29.342 [2024-07-22 18:35:19.986945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.987224] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 00:28:29.342 [2024-07-22 18:35:19.987380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.342 [2024-07-22 18:35:19.987414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.987435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.342 [2024-07-22 18:35:19.987454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.987473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.342 [2024-07-22 18:35:19.987491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.987509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:29.342 [2024-07-22 18:35:19.987527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.987559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.342 [2024-07-22 18:35:19.987579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:29.342 [2024-07-22 18:35:19.987607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:28:29.342 [2024-07-22 18:35:19.989044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.342 [2024-07-22 18:35:19.989099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:28:29.342 [2024-07-22 18:35:19.989598] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.342 [2024-07-22 18:35:19.989641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ad80 with addr=10.0.0.2, port=4421 00:28:29.342 [2024-07-22 18:35:19.989664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(5) to be set 00:28:29.342 [2024-07-22 18:35:19.989762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 
(9): Bad file descriptor 00:28:29.342 [2024-07-22 18:35:19.989811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.342 [2024-07-22 18:35:19.989846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.343 [2024-07-22 18:35:19.989866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.343 [2024-07-22 18:35:19.989942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.343 [2024-07-22 18:35:19.989968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.343 [2024-07-22 18:35:30.068636] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:29.343 Received shutdown signal, test time was about 55.333276 seconds 00:28:29.343 00:28:29.343 Latency(us) 00:28:29.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.343 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:29.343 Verification LBA range: start 0x0 length 0x4000 00:28:29.343 Nvme0n1 : 55.33 5816.60 22.72 0.00 0.00 21977.85 714.94 7046430.72 00:28:29.343 =================================================================================================================== 00:28:29.343 Total : 5816.60 22.72 0.00 0.00 21977.85 714.94 7046430.72 00:28:29.343 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.600 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:29.600 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:29.600 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:28:29.600 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:29.600 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:28:29.600 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:29.600 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:28:29.600 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:29.601 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:29.601 rmmod nvme_tcp 00:28:29.601 rmmod nvme_fabrics 00:28:29.601 rmmod nvme_keyring 00:28:29.601 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:29.601 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:28:29.601 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:28:29.601 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 88050 ']' 00:28:29.601 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 88050 00:28:29.601 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 88050 ']' 00:28:29.601 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 88050 00:28:29.601 18:35:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:28:29.858 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:29.858 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88050 00:28:29.858 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:29.858 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:29.858 killing process with pid 88050 00:28:29.858 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88050' 00:28:29.858 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 88050 00:28:29.858 18:35:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 88050 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:31.232 00:28:31.232 real 1m3.714s 00:28:31.232 user 2m57.324s 00:28:31.232 sys 0m16.411s 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:31.232 ************************************ 00:28:31.232 END TEST nvmf_host_multipath 00:28:31.232 ************************************ 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.232 ************************************ 00:28:31.232 START TEST nvmf_timeout 00:28:31.232 ************************************ 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:31.232 * Looking for test storage... 
00:28:31.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:31.232 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
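
Two separate SPDK processes are driven over JSON-RPC in this test: the nvmf target on its default UNIX socket and a bdevperf instance on the bdevperf_rpc_sock path set just above; rpc.py's -s option selects which socket to talk to. Purely as an illustration (these two query RPCs are standard SPDK methods shown as examples, not calls made by timeout.sh):

    # target application, default socket /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
    # bdevperf application, reached through the second socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
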
00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.233 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:31.490 Cannot find device "nvmf_tgt_br" 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:31.490 Cannot find device "nvmf_tgt_br2" 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
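
The variables above describe the virtual topology that nvmf_veth_init tears down and rebuilds in the trace that follows: an initiator-side veth pair left in the root namespace and target-side veth pairs moved into the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge. A condensed sketch of that bring-up (second target interface and error handling omitted; the commands mirror the trace below):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root netns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) on the initiator veth
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow traffic bridged between the veth ends
    ping -c 1 10.0.0.2                                                  # reachability check into the target namespace
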
00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:31.490 Cannot find device "nvmf_tgt_br" 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:31.490 Cannot find device "nvmf_tgt_br2" 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:31.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:31.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:31.490 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:31.748 18:35:43 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:31.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:28:31.748 00:28:31.748 --- 10.0.0.2 ping statistics --- 00:28:31.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.748 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:31.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:31.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:28:31.748 00:28:31.748 --- 10.0.0.3 ping statistics --- 00:28:31.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.748 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:31.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:31.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:28:31.748 00:28:31.748 --- 10.0.0.1 ping statistics --- 00:28:31.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.748 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=89221 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 89221 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 89221 ']' 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:31.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:31.748 18:35:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:31.748 [2024-07-22 18:35:43.756368] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:31.748 [2024-07-22 18:35:43.756557] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.009 [2024-07-22 18:35:43.936766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:32.268 [2024-07-22 18:35:44.171192] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:32.268 [2024-07-22 18:35:44.171292] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.268 [2024-07-22 18:35:44.171308] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.268 [2024-07-22 18:35:44.171321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.268 [2024-07-22 18:35:44.171333] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.268 [2024-07-22 18:35:44.171512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.268 [2024-07-22 18:35:44.171520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.525 [2024-07-22 18:35:44.386979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:32.783 18:35:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:32.783 18:35:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:32.783 18:35:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:32.783 18:35:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:32.783 18:35:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:32.783 18:35:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.783 18:35:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:32.783 18:35:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:33.041 [2024-07-22 18:35:45.038401] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.299 18:35:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:33.557 Malloc0 00:28:33.557 18:35:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:33.815 18:35:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:34.073 18:35:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.330 [2024-07-22 18:35:46.195580] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.330 18:35:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=89270 00:28:34.330 18:35:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:34.330 18:35:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 89270 /var/tmp/bdevperf.sock 00:28:34.330 18:35:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 89270 ']' 00:28:34.330 18:35:46 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:34.330 18:35:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:34.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:34.331 18:35:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:34.331 18:35:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:34.331 18:35:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:34.331 [2024-07-22 18:35:46.318839] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:28:34.331 [2024-07-22 18:35:46.319003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89270 ] 00:28:34.588 [2024-07-22 18:35:46.498765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.846 [2024-07-22 18:35:46.790190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:35.104 [2024-07-22 18:35:47.010379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:35.362 18:35:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:35.362 18:35:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:35.362 18:35:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:35.621 18:35:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:35.879 NVMe0n1 00:28:35.879 18:35:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=89295 00:28:35.879 18:35:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:28:35.879 18:35:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:36.137 Running I/O for 10 seconds... 
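
Everything up to this point is setup; the RPC on the next line removes the only listener out from under the running verify workload, which is what produces the abort notices that follow. How long and how often the host side retries is bounded by the two attach-time parameters shown above; a sketch of that step with the knobs annotated (values as used in this run, descriptions as understood from SPDK's bdev_nvme options):

    # --ctrlr-loss-timeout-sec 5 : keep trying to reconnect for at most 5 s after
    #     the connection drops before the controller is treated as lost
    # --reconnect-delay-sec 2    : wait 2 s between successive reconnect attempts
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
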
00:28:37.075 18:35:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.075 [2024-07-22 18:35:49.083448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.075 [2024-07-22 18:35:49.083519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.075 [2024-07-22 18:35:49.083541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.075 [2024-07-22 18:35:49.083559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.075 [2024-07-22 18:35:49.083575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.075 [2024-07-22 18:35:49.083592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.075 [2024-07-22 18:35:49.083607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.075 [2024-07-22 18:35:49.083624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.075 [2024-07-22 18:35:49.083639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:37.075 [2024-07-22 18:35:49.083952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.075 [2024-07-22 18:35:49.083979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.075 [2024-07-22 18:35:49.084015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.075 [2024-07-22 18:35:49.084038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.075 [2024-07-22 18:35:49.084076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.075 [2024-07-22 18:35:49.084101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.075 [2024-07-22 18:35:49.084132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.075 [2024-07-22 18:35:49.084149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.075 [2024-07-22 18:35:49.084182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.075 [2024-07-22 18:35:49.084198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.075 [2024-07-22 
18:35:49.084237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.075 [2024-07-22 18:35:49.084254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.075 [... roughly 80 near-identical record pairs trimmed: each remaining outstanding WRITE (len:8, lba 49480 through 50112, various cid values) is printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) while the I/O queue pair is torn down after the listener removal ...] 00:28:37.077 [2024-07-22 18:35:49.087226]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.077 [2024-07-22 18:35:49.087262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.077 [2024-07-22 18:35:49.087303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.077 [2024-07-22 18:35:49.087337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.077 [2024-07-22 18:35:49.087371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.077 [2024-07-22 18:35:49.087405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.077 [2024-07-22 18:35:49.087440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.077 [2024-07-22 18:35:49.087474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.077 [2024-07-22 18:35:49.087513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.077 [2024-07-22 18:35:49.087564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.077 [2024-07-22 18:35:49.087599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:99 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.077 [2024-07-22 18:35:49.087633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.077 [2024-07-22 18:35:49.087647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.087667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.087682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.087701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.087715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.087737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.087752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.087771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.087786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.087805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.087819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.087841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.087855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.087875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.087890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.087909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.087923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.087943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50280 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.087957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.087977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.087991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.088043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.088082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.088121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 
[2024-07-22 18:35:49.088342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.078 [2024-07-22 18:35:49.088659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.078 [2024-07-22 18:35:49.088698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.078 [2024-07-22 18:35:49.088717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:28:37.078 [2024-07-22 18:35:49.088736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.078 [2024-07-22 18:35:49.088752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.078 [2024-07-22 18:35:49.088767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50328 len:8 PRP1 0x0 PRP2 0x0 00:28:37.078 [2024-07-22 18:35:49.088784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.337 [2024-07-22 18:35:49.089044] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:28:37.337 [2024-07-22 18:35:49.089360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:37.337 [2024-07-22 18:35:49.089401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:37.337 [2024-07-22 18:35:49.089532] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.337 [2024-07-22 18:35:49.089587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:37.337 [2024-07-22 18:35:49.089608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:37.337 [2024-07-22 18:35:49.089651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:37.337 [2024-07-22 18:35:49.089679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:37.337 [2024-07-22 18:35:49.089702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:37.337 [2024-07-22 18:35:49.089719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:37.337 [2024-07-22 18:35:49.089754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
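At this point the target side of the connection is gone, so every reconnect attempt ends the same way: uring_sock_create() reports connect() errno 111 (connection refused), controller initialization fails, and bdev_nvme retries the reset (here roughly every two seconds, at 18:35:49, 18:35:51 and 18:35:53) until it gives up at 18:35:55 with "already in failed state". The get_controller / get_bdev helpers the script calls a few entries below simply query the bdevperf application over its RPC socket to check whether the NVMe-oF controller and its namespace bdev still exist. A minimal sketch of that check, using only calls that appear verbatim in this log (socket path and jq filter copied from it; the helper bodies in host/timeout.sh are not reproduced here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Controller name as seen by bdevperf; prints "NVMe0" while attached, nothing once it is dropped.
"$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'

# Namespace bdev built on that controller; prints "NVMe0n1" while it exists, nothing afterwards.
"$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'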
00:28:37.337 [2024-07-22 18:35:49.089772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:37.337 18:35:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:28:39.263 [2024-07-22 18:35:51.090076] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.263 [2024-07-22 18:35:51.090168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:39.263 [2024-07-22 18:35:51.090194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:39.263 [2024-07-22 18:35:51.090250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:39.263 [2024-07-22 18:35:51.090299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.263 [2024-07-22 18:35:51.090323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.263 [2024-07-22 18:35:51.090341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.263 [2024-07-22 18:35:51.090391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.263 [2024-07-22 18:35:51.090410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.263 18:35:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:28:39.263 18:35:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:39.263 18:35:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:39.522 18:35:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:39.522 18:35:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:28:39.522 18:35:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:39.522 18:35:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:39.779 18:35:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:39.779 18:35:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:28:41.151 [2024-07-22 18:35:53.090627] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.151 [2024-07-22 18:35:53.090744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:41.151 [2024-07-22 18:35:53.090771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:41.151 [2024-07-22 18:35:53.090818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:41.151 [2024-07-22 18:35:53.090851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.151 [2024-07-22 18:35:53.090871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization 
failed 00:28:41.151 [2024-07-22 18:35:53.090888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.151 [2024-07-22 18:35:53.090937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:41.151 [2024-07-22 18:35:53.090957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.680 [2024-07-22 18:35:55.091067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:43.680 [2024-07-22 18:35:55.091153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:43.680 [2024-07-22 18:35:55.091188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:43.680 [2024-07-22 18:35:55.091218] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:43.680 [2024-07-22 18:35:55.091273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.245 00:28:44.245 Latency(us) 00:28:44.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.245 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:44.245 Verification LBA range: start 0x0 length 0x4000 00:28:44.245 NVMe0n1 : 8.14 756.96 2.96 15.72 0.00 165378.13 4289.63 7015926.69 00:28:44.245 =================================================================================================================== 00:28:44.245 Total : 756.96 2.96 15.72 0.00 165378.13 4289.63 7015926.69 00:28:44.245 0 00:28:44.812 18:35:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:28:44.812 18:35:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:44.812 18:35:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:45.069 18:35:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:28:45.069 18:35:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:28:45.069 18:35:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:45.069 18:35:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 89295 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 89270 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 89270 ']' 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 89270 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89270 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:28:45.328 killing process with pid 89270 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89270' 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 89270 00:28:45.328 Received shutdown signal, test time was about 9.244189 seconds 00:28:45.328 00:28:45.328 Latency(us) 00:28:45.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.328 =================================================================================================================== 00:28:45.328 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.328 18:35:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 89270 00:28:46.702 18:35:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.702 [2024-07-22 18:35:58.680621] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:46.702 18:35:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=89418 00:28:46.702 18:35:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:46.702 18:35:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 89418 /var/tmp/bdevperf.sock 00:28:46.702 18:35:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 89418 ']' 00:28:46.702 18:35:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:46.702 18:35:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:46.702 18:35:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:46.702 18:35:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:46.702 18:35:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:46.961 [2024-07-22 18:35:58.804954] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:28:46.961 [2024-07-22 18:35:58.805132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89418 ] 00:28:46.961 [2024-07-22 18:35:58.974291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.219 [2024-07-22 18:35:59.230957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.477 [2024-07-22 18:35:59.443622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:28:48.043 18:35:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:48.043 18:35:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:28:48.043 18:35:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:48.301 18:36:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:28:48.559 NVMe0n1 00:28:48.559 18:36:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=89442 00:28:48.559 18:36:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:48.559 18:36:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:28:48.559 Running I/O for 10 seconds... 
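The second bdevperf instance (pid 89418) is the actual timeout test: bdev_nvme_attach_controller is invoked with --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 2 and --ctrlr-loss-timeout-sec 5, so once the listener disappears the initiator is expected to retry about once per second, start failing queued I/O after roughly two seconds, and abandon the controller after roughly five. Condensed into a standalone sketch, with paths, socket name and arguments copied from the log above (the real sequence lives in host/timeout.sh and is only approximated here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Global bdev_nvme options; "-r -1" is passed exactly as in this run.
"$rpc" -s "$sock" bdev_nvme_set_options -r -1

# Attach with a 1 s reconnect delay, 2 s fast-io-fail and 5 s controller-loss timeout.
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Kick off the verify workload, then pull the listener out from under it
# (the nvmf_subsystem_remove_listener call recorded immediately below).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
"$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420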
00:28:49.493 18:36:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:49.754 [2024-07-22 18:36:01.717873 through 18:36:01.719254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set (this identical message is logged back-to-back several dozen times for the same tqpair; the duplicate entries are condensed here)
00:28:49.755 [2024-07-22 18:36:01.719358 through 18:36:01.721621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 nsid:1 lba:46520..46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each immediately followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (57 near-identical READ command/ABORTED completion pairs condensed; only the per-command cid values differ)
00:28:49.756 [2024-07-22 18:36:01.721641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:53 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.756 [2024-07-22 18:36:01.721657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.756 [2024-07-22 18:36:01.721677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.756 [2024-07-22 18:36:01.721693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.756 [2024-07-22 18:36:01.721723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.756 [2024-07-22 18:36:01.721738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.756 [2024-07-22 18:36:01.721759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.756 [2024-07-22 18:36:01.721774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.756 [2024-07-22 18:36:01.721794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.756 [2024-07-22 18:36:01.721809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.756 [2024-07-22 18:36:01.721832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.756 [2024-07-22 18:36:01.721848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.756 [2024-07-22 18:36:01.721869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.721885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.721923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.721941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.721963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.721978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.721999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47056 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:49.757 [2024-07-22 18:36:01.722483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722858] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.722965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.722985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.757 [2024-07-22 18:36:01.723000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.723023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.757 [2024-07-22 18:36:01.723038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.723070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.757 [2024-07-22 18:36:01.723085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.723108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.757 [2024-07-22 18:36:01.723124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.723145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.757 [2024-07-22 18:36:01.723161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.723181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.757 [2024-07-22 18:36:01.723201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.723241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.757 [2024-07-22 18:36:01.723257] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.723278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.757 [2024-07-22 18:36:01.723293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.723313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.757 [2024-07-22 18:36:01.723329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.723348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.757 [2024-07-22 18:36:01.723375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.723395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.757 [2024-07-22 18:36:01.723410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.757 [2024-07-22 18:36:01.723433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.758 [2024-07-22 18:36:01.723449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.723469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.758 [2024-07-22 18:36:01.723490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.723510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.758 [2024-07-22 18:36:01.723525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.723547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.758 [2024-07-22 18:36:01.723563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.723583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:49.758 [2024-07-22 18:36:01.723599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.723619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.758 [2024-07-22 18:36:01.723634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.723653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set 00:28:49.758 [2024-07-22 18:36:01.723676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.723697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.723713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47264 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.723734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.723752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.723768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.723782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47392 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.723799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.723831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.723848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.723861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47400 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.723879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.723893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.723908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.723921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47408 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.723938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.723952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.723969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.723983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47416 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.724003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.724017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.724032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.724046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47424 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 
[2024-07-22 18:36:01.724063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.724077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.724092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.724105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47432 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.724122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.724136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.724151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.724164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47440 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.724181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.724195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.724223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.724240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47448 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.724260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.724276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.724291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.724305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47456 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.724322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.724341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.724357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.724370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47464 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.724389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.724404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.724418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.724432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47472 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.724449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.724463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.724478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.724491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47480 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.724510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.724525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.724539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.724553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47488 len:8 PRP1 0x0 PRP2 0x0 00:28:49.758 [2024-07-22 18:36:01.724570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.758 [2024-07-22 18:36:01.724584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.758 [2024-07-22 18:36:01.724599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.758 [2024-07-22 18:36:01.724612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47496 len:8 PRP1 0x0 PRP2 0x0 00:28:49.759 [2024-07-22 18:36:01.724629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.759 [2024-07-22 18:36:01.724643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.759 [2024-07-22 18:36:01.724658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.759 [2024-07-22 18:36:01.739314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47504 len:8 PRP1 0x0 PRP2 0x0 00:28:49.759 [2024-07-22 18:36:01.739373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.759 [2024-07-22 18:36:01.739407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.759 [2024-07-22 18:36:01.739427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.759 [2024-07-22 18:36:01.739443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47512 len:8 PRP1 0x0 PRP2 0x0 00:28:49.759 [2024-07-22 18:36:01.739464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.759 [2024-07-22 18:36:01.739480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.759 [2024-07-22 18:36:01.739495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.759 [2024-07-22 18:36:01.739509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47520 len:8 PRP1 0x0 PRP2 0x0 00:28:49.759 [2024-07-22 18:36:01.739536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.759 [2024-07-22 18:36:01.739551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.759 [2024-07-22 18:36:01.739566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.759 [2024-07-22 18:36:01.739581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47528 len:8 PRP1 0x0 PRP2 0x0 00:28:49.759 [2024-07-22 18:36:01.739598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.759 [2024-07-22 18:36:01.739612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:49.759 [2024-07-22 18:36:01.739627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:49.759 [2024-07-22 18:36:01.739640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47536 len:8 PRP1 0x0 PRP2 0x0 00:28:49.759 [2024-07-22 18:36:01.739658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.759 [2024-07-22 18:36:01.739954] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller. 00:28:49.759 [2024-07-22 18:36:01.740125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.759 [2024-07-22 18:36:01.740179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.759 [2024-07-22 18:36:01.740202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.759 [2024-07-22 18:36:01.740243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.759 [2024-07-22 18:36:01.740260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.759 [2024-07-22 18:36:01.740277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.759 [2024-07-22 18:36:01.740293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:49.759 [2024-07-22 18:36:01.740310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:49.759 [2024-07-22 18:36:01.740324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:49.759 [2024-07-22 18:36:01.740628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.759 [2024-07-22 18:36:01.740681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:49.759 [2024-07-22 18:36:01.740843] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.759 [2024-07-22 18:36:01.740892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 
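For reference while reading the flood of qpair messages above: each READ/WRITE line is printed by nvme_io_qpair_print_command for a request that was still outstanding when the queue pair was torn down, and the paired completion line carries NVMe status (00/08), i.e. status code type 0x0 (generic) with status code 0x08, command aborted due to SQ deletion; the connect() failure at the end of this stretch reports errno = 111, which is ECONNREFUSED on Linux, consistent with the target listener having been removed at this point in the test. Below is a small, hypothetical bash sketch for tallying those aborts offline; it assumes this console output has been saved to a file named build.log, and neither that file nor the helper is part of the test itself.

# Count aborted commands per opcode (READ/WRITE) and confirm how many
# completions in the flood carry the (00/08) "SQ deletion" status.
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log |
  awk '{print $NF}' | sort | uniq -c
grep -c 'ABORTED - SQ DELETION (00/08)' build.log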
00:28:49.759 [2024-07-22 18:36:01.740914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:49.759 [2024-07-22 18:36:01.740950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:49.759 [2024-07-22 18:36:01.740979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.759 [2024-07-22 18:36:01.741007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.759 [2024-07-22 18:36:01.741025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.759 [2024-07-22 18:36:01.741064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.759 [2024-07-22 18:36:01.741083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.759 18:36:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:28:51.130 [2024-07-22 18:36:02.741353] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.130 [2024-07-22 18:36:02.741458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:51.130 [2024-07-22 18:36:02.741488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:51.130 [2024-07-22 18:36:02.741539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:51.130 [2024-07-22 18:36:02.741571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.130 [2024-07-22 18:36:02.741592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.130 [2024-07-22 18:36:02.741610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.130 [2024-07-22 18:36:02.741659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.130 [2024-07-22 18:36:02.741689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.130 18:36:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.130 [2024-07-22 18:36:02.985881] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.130 18:36:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 89442 00:28:52.064 [2024-07-22 18:36:03.755708] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
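Judging from the trace just above, this is the crux of this phase of host/timeout.sh: with the TCP listener gone, queued I/O is aborted and every reconnect attempt fails, and the controller reset only completes once the @91 step re-adds the listener (followed by the "Resetting controller successful" notice). A minimal sketch of that listener toggle is shown below, reusing the exact rpc.py subcommands, NQN, address and port recorded in this log; the wrapper script itself is illustrative and is not the real timeout.sh.

#!/usr/bin/env bash
# Drop the NVMe/TCP listener to force host-side timeouts, then restore it
# so the host's controller reset can succeed again (mirrors the
# nvmf_subsystem_remove_listener / nvmf_subsystem_add_listener calls
# captured in this trace).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # queued I/O aborts with (00/08)
sleep 1
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # reconnect and reset succeed again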
00:28:58.674 00:28:58.674 Latency(us) 00:28:58.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.674 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:58.674 Verification LBA range: start 0x0 length 0x4000 00:28:58.674 NVMe0n1 : 10.01 4653.57 18.18 0.00 0.00 27451.51 1936.29 3050402.91 00:28:58.674 =================================================================================================================== 00:28:58.674 Total : 4653.57 18.18 0.00 0.00 27451.51 1936.29 3050402.91 00:28:58.674 0 00:28:58.674 18:36:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=89546 00:28:58.674 18:36:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:58.674 18:36:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:28:58.674 Running I/O for 10 seconds... 00:28:59.609 18:36:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.896 [2024-07-22 18:36:11.867411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:28:59.896 [2024-07-22 18:36:11.867506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:28:59.896 [2024-07-22 18:36:11.867525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:28:59.896 [2024-07-22 18:36:11.867687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.867730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.867769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.867787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.867807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.867823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.867842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.867857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.867877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.896 [2024-07-22 18:36:11.867893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.867911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:72 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.896 [2024-07-22 18:36:11.867926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.867945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.896 [2024-07-22 18:36:11.867960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.867978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.896 [2024-07-22 18:36:11.867993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.896 [2024-07-22 18:36:11.868027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.896 [2024-07-22 18:36:11.868090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.896 [2024-07-22 18:36:11.868125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.896 [2024-07-22 18:36:11.868158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60520 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.896 [2024-07-22 18:36:11.868567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.896 [2024-07-22 18:36:11.868581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.868615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:59.897 [2024-07-22 18:36:11.868648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.868680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.868712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.868745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.868777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.868810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.868842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.868878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.868910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.868943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.868967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.868982] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.869566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.869599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.869631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.869663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.869696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.869734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.869766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.897 [2024-07-22 18:36:11.869799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.897 [2024-07-22 18:36:11.869883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.897 [2024-07-22 18:36:11.869908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.869928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.869943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.869962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.869977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.869996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:59.898 [2024-07-22 18:36:11.870395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.870640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.870673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.870706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870724] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.870740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.870773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.870806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.870838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.870871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.870903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.870936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.870968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.870993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.871007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.871025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.871039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.871057] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.871082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.871113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.871129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.871148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.871162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.871180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.898 [2024-07-22 18:36:11.871194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.871226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.871244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.898 [2024-07-22 18:36:11.871262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.898 [2024-07-22 18:36:11.871277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.899 [2024-07-22 18:36:11.871310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.899 [2024-07-22 18:36:11.871342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.899 [2024-07-22 18:36:11.871374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.899 [2024-07-22 18:36:11.871407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.899 [2024-07-22 18:36:11.871439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.899 [2024-07-22 18:36:11.871472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:59.899 [2024-07-22 18:36:11.871780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.871971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.871985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.872003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.872017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.872035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.872050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.872069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.872084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.872101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:59.899 [2024-07-22 18:36:11.872116] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.872132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(5) to be set 00:28:59.899 [2024-07-22 18:36:11.872153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:59.899 [2024-07-22 18:36:11.872167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:59.899 [2024-07-22 18:36:11.872187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61480 len:8 PRP1 0x0 PRP2 0x0 00:28:59.899 [2024-07-22 18:36:11.872202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.899 [2024-07-22 18:36:11.872506] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 00:28:59.899 [2024-07-22 18:36:11.872810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.899 [2024-07-22 18:36:11.872954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:59.899 [2024-07-22 18:36:11.873118] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.899 [2024-07-22 18:36:11.873155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:28:59.899 [2024-07-22 18:36:11.873173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:28:59.899 [2024-07-22 18:36:11.873203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:28:59.899 [2024-07-22 18:36:11.873254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:59.899 [2024-07-22 18:36:11.873271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:59.899 [2024-07-22 18:36:11.873288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:59.899 [2024-07-22 18:36:11.873321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:59.899 [2024-07-22 18:36:11.873339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:59.899 18:36:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:29:01.274 [2024-07-22 18:36:12.873558] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.274 [2024-07-22 18:36:12.873653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:29:01.274 [2024-07-22 18:36:12.873682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:29:01.274 [2024-07-22 18:36:12.873727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:01.274 [2024-07-22 18:36:12.873778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:01.274 [2024-07-22 18:36:12.873796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:01.274 [2024-07-22 18:36:12.873814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.274 [2024-07-22 18:36:12.873857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:01.274 [2024-07-22 18:36:12.873876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:02.208 [2024-07-22 18:36:13.874100] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.208 [2024-07-22 18:36:13.874193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:29:02.208 [2024-07-22 18:36:13.874231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:29:02.208 [2024-07-22 18:36:13.874275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:02.208 [2024-07-22 18:36:13.874306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:02.208 [2024-07-22 18:36:13.874323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:02.208 [2024-07-22 18:36:13.874340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:02.208 [2024-07-22 18:36:13.874382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:02.208 [2024-07-22 18:36:13.874401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.143 [2024-07-22 18:36:14.878075] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.143 [2024-07-22 18:36:14.878169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:29:03.143 [2024-07-22 18:36:14.878194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:29:03.143 [2024-07-22 18:36:14.878494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:03.143 [2024-07-22 18:36:14.878782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.143 [2024-07-22 18:36:14.878812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.143 [2024-07-22 18:36:14.878832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.143 [2024-07-22 18:36:14.882911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.143 [2024-07-22 18:36:14.882951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.143 18:36:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:03.401 [2024-07-22 18:36:15.162013] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.401 18:36:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 89546 00:29:03.968 [2024-07-22 18:36:15.915528] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
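The connect() failures above (errno = 111, connection refused) keep the reconnect attempts failing for as long as the target has no TCP listener on 10.0.0.2 port 4420; once nvmf_subsystem_add_listener re-adds the listener, the pending reset completes ("Resetting controller successful"). A minimal stand-alone sketch of that recovery step, using the command exactly as logged above and relying on the target application's default RPC socket (the socket path is not shown in this excerpt):

    # Re-add the TCP listener on the target so the host's queued controller
    # reset/reconnect can finally succeed (same NQN, address and port as logged).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420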
00:29:09.265
00:29:09.265 Latency(us)
00:29:09.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.265 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:09.265 Verification LBA range: start 0x0 length 0x4000
00:29:09.265 NVMe0n1 : 10.01 4150.17 16.21 3506.41 0.00 16679.62 703.77 3019898.88
00:29:09.265 ===================================================================================================================
00:29:09.265 Total : 4150.17 16.21 3506.41 0.00 16679.62 0.00 3019898.88
00:29:09.265 0
00:29:09.265 18:36:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 89418
00:29:09.265 18:36:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 89418 ']'
00:29:09.265 18:36:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 89418
00:29:09.265 18:36:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:29:09.265 18:36:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:09.265 18:36:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89418
killing process with pid 89418
Received shutdown signal, test time was about 10.000000 seconds
00:29:09.265
00:29:09.265 Latency(us)
00:29:09.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.265 ===================================================================================================================
00:29:09.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:09.265 18:36:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:29:09.265 18:36:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:29:09.265 18:36:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89418'
00:29:09.265 18:36:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 89418
00:29:09.265 18:36:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 89418
00:29:10.199 18:36:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=89663
00:29:10.199 18:36:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:29:10.199 18:36:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 89663 /var/tmp/bdevperf.sock
00:29:10.199 18:36:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 89663 ']'
00:29:10.199 18:36:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:10.200 18:36:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:10.200 18:36:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
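A quick cross-check of the bdevperf summary table above (not part of the captured output): the MiB/s column should equal IOPS times the 4096-byte I/O size reported in the Job: line, divided by 2^20. A one-line sketch, assuming any POSIX awk:

    # MiB/s = IOPS * io_size / 2^20, with io_size = 4096 from the job description.
    awk 'BEGIN { printf "%.2f MiB/s\n", 4150.17 * 4096 / 1048576 }'   # -> 16.21 MiB/s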
00:29:10.200 18:36:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:10.200 18:36:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:10.200 [2024-07-22 18:36:21.964913] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:10.200 [2024-07-22 18:36:21.965146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89663 ] 00:29:10.200 [2024-07-22 18:36:22.143027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.459 [2024-07-22 18:36:22.392646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.717 [2024-07-22 18:36:22.601039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:10.975 18:36:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:10.975 18:36:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:29:10.975 18:36:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=89678 00:29:10.975 18:36:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89663 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:29:10.975 18:36:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:29:11.239 18:36:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:11.500 NVMe0n1 00:29:11.500 18:36:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=89721 00:29:11.500 18:36:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:11.500 18:36:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:29:11.757 Running I/O for 10 seconds... 
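Condensed, the commands traced above boil down to the following sequence (a reconstruction using the paths, socket and NQN from the log; the waitforlisten/xtrace plumbing and the bpftrace helper are omitted): start bdevperf idle, attach the remote controller over TCP with a 5 s controller-loss timeout and 2 s reconnect delay, then kick off the 10-second randread job over the same RPC socket.

    # Start bdevperf on core mask 0x4; -z keeps it idle until perform_tests is
    # called over the RPC socket given with -r. -q/-o/-w/-t: queue depth 128,
    # 4096-byte I/Os, random reads, 10 seconds.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &

    # Configure the NVMe bdev module and attach the target subsystem over TCP.
    # --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 are the options this
    # test exercises when the listener is removed later on.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_options -r -1 -e 9
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Launch the actual I/O run; bdevperf prints 'Running I/O for 10 seconds...'
    # once the job starts.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &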
00:29:12.691 18:36:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.952 [2024-07-22 18:36:24.772850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.772913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.772932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.772946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.772963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.772976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.772990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 
is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773299] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is 
same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.952 [2024-07-22 18:36:24.773736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same 
with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.773987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.774001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.774013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.774027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.774039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.774053] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.774066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.774080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.774092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set 00:29:12.953 [2024-07-22 18:36:24.774107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with 
the state(5) to be set
00:29:12.953 [2024-07-22 18:36:24.774120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(5) to be set
[the tcp.c:1653 recv-state message above repeats with identical text, only the capture timestamp changing, through 18:36:24.774736; the duplicate entries are elided here]
00:29:12.953 [2024-07-22 18:36:24.774427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:12.953 [2024-07-22 18:36:24.774477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.953 [2024-07-22 18:36:24.774509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:12.953 [2024-07-22 18:36:24.774528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.953 [2024-07-22 18:36:24.774544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:12.953 [2024-07-22 18:36:24.774562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.953 [2024-07-22 18:36:24.774577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:12.953 [2024-07-22 18:36:24.774599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.953 [2024-07-22 18:36:24.774614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:29:12.953 [2024-07-22 18:36:24.774829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.953 [2024-07-22 18:36:24.774854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the READ / ABORTED - SQ DELETION pair above repeats for every remaining command on sqid:1, cid 125 down through cid 0 with a different lba per entry, from 18:36:24.774890 through 18:36:24.780308; the individual entries are elided here]
00:29:12.957 [2024-07-22 18:36:24.780331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(5) to be set
00:29:12.957 [2024-07-22 18:36:24.780351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:12.957 [2024-07-22 18:36:24.780368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:12.957 [2024-07-22 18:36:24.780387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:8 PRP1 0x0 PRP2 0x0
00:29:12.957 [2024-07-22 18:36:24.780405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:12.957 [2024-07-22 18:36:24.780679] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b000 was disconnected and freed. reset controller.
00:29:12.957 [2024-07-22 18:36:24.781015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:12.957 [2024-07-22 18:36:24.781067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:29:12.957 [2024-07-22 18:36:24.781227] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.957 [2024-07-22 18:36:24.781264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420
00:29:12.957 [2024-07-22 18:36:24.781283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set
00:29:12.957 [2024-07-22 18:36:24.781315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor
00:29:12.957 [2024-07-22 18:36:24.781341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:12.957 [2024-07-22 18:36:24.781363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:12.957 [2024-07-22 18:36:24.781379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:12.957 [2024-07-22 18:36:24.781419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
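What the flood above records is the host side of a deliberate disconnect: the target tears down its submission queues while the random-read job (queue depth 128, summarized further below) still has a full queue of commands in flight, so every outstanding command is completed as ABORTED - SQ DELETION, the qpair is disconnected and freed, and bdev_nvme schedules a controller reset. For reference, a controller with this kind of retry behaviour can be attached through rpc.py roughly as in the sketch below; the address, port, subsystem NQN and the NVMe0 bdev name appear in this log, while the three timeout options are assumptions about what this SPDK build accepts, not the test's literal invocation.

  # Sketch only: attach an NVMe-oF/TCP controller with explicit reconnect knobs.
  # 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1 and the NVMe0 name come from the log;
  # the --reconnect-delay-sec / --ctrlr-loss-timeout-sec / --fast-io-fail-timeout-sec
  # options are assumed to be available here and are not taken from host/timeout.sh.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 \
      --ctrlr-loss-timeout-sec 10 \
      --fast-io-fail-timeout-sec 5

With settings like these the bdev layer keeps the NVMe0 bdev alive across the outage and retries the TCP connect periodically, which matches the roughly two-second retry cadence visible in the log that follows.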
00:29:12.957 [2024-07-22 18:36:24.792886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.957 18:36:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 89721 00:29:14.852 [2024-07-22 18:36:26.793243] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.852 [2024-07-22 18:36:26.793348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:29:14.852 [2024-07-22 18:36:26.793376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:29:14.852 [2024-07-22 18:36:26.793427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:14.852 [2024-07-22 18:36:26.793461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.852 [2024-07-22 18:36:26.793481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.852 [2024-07-22 18:36:26.793500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.852 [2024-07-22 18:36:26.793553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.852 [2024-07-22 18:36:26.793572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.378 [2024-07-22 18:36:28.793818] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.378 [2024-07-22 18:36:28.793924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.2, port=4420 00:29:17.378 [2024-07-22 18:36:28.793949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(5) to be set 00:29:17.378 [2024-07-22 18:36:28.793998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:29:17.378 [2024-07-22 18:36:28.794029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.378 [2024-07-22 18:36:28.794048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.378 [2024-07-22 18:36:28.794065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.378 [2024-07-22 18:36:28.794111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.378 [2024-07-22 18:36:28.794129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.289 [2024-07-22 18:36:30.794254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
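The 'wait 89721' step above is the script blocking on a job it launched into the background earlier, while the reconnect attempts that precede the final failure (18:36:26 and 18:36:28, each refused with errno 111) are the bdev_nvme poller retrying the dead connection about every two seconds. The shape of that launch/disturb/wait pattern, with purely illustrative stand-ins for the real commands, is:

  # Illustrative only: run_io_job and stop_target are stand-ins,
  # not the helpers host/timeout.sh actually uses.
  run_io_job()  { sleep 8; }    # e.g. an I/O generator doing 4 KiB randread at qd=128
  stop_target() { :; }          # whatever makes the target refuse connections

  run_io_job &> io_job.log &    # start the I/O generator in the background
  io_pid=$!
  stop_target                   # provoke the abort/reconnect storm logged above
  wait "$io_pid"                # the script parks here (cf. 'wait 89721') until the job ends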
00:29:19.289 [2024-07-22 18:36:30.794333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.289 [2024-07-22 18:36:30.794359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.289 [2024-07-22 18:36:30.794376] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:29:19.289 [2024-07-22 18:36:30.794426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.856
00:29:19.856                                                 Latency(us)
00:29:19.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:19.856 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:29:19.856 NVMe0n1 : 8.17 1662.42 6.49 15.66 0.00 76369.20 10187.87 7046430.72
00:29:19.856 ===================================================================================================================
00:29:19.856 Total : 1662.42 6.49 15.66 0.00 76369.20 10187.87 7046430.72
00:29:19.856 0
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:19.856 Attaching 5 probes...
00:29:19.856 1306.964904: reset bdev controller NVMe0
00:29:19.856 1307.085399: reconnect bdev controller NVMe0
00:29:19.856 3318.939694: reconnect delay bdev controller NVMe0
00:29:19.856 3318.975381: reconnect bdev controller NVMe0
00:29:19.856 5319.591431: reconnect delay bdev controller NVMe0
00:29:19.856 5319.633678: reconnect bdev controller NVMe0
00:29:19.856 7320.144585: reconnect delay bdev controller NVMe0
00:29:19.856 7320.173960: reconnect bdev controller NVMe0
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 89678
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 89663
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 89663 ']'
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 89663
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89663
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89663'
00:29:19.856 killing process with pid 89663
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 89663
00:29:19.856 Received shutdown signal, test time was about 8.231529 seconds
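The trace dumped just above (the "Attaching 5 probes..." block) is what decides the verdict: the test wants more than two 'reconnect delay bdev controller NVMe0' records, i.e. proof that the controller really backed off between reconnect attempts. Here grep counted 3, so the (( 3 <= 2 )) guard is false and the script moves on to cleanup. A sketch of that check follows; the real control flow in host/timeout.sh may differ slightly.

  # Count the delay records produced while the I/O job rode through the outage,
  # and fail the test if back-off never happened.
  trace_file=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  delay_count=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
  if (( delay_count <= 2 )); then
      echo "expected repeated reconnect delays, saw only $delay_count" >&2
      exit 1
  fi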
00:29:19.856
00:29:19.856                                                 Latency(us)
00:29:19.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:19.856 ===================================================================================================================
00:29:19.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:19.856 18:36:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 89663
00:29:21.232 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:21.491 rmmod nvme_tcp
00:29:21.491 rmmod nvme_fabrics
00:29:21.491 rmmod nvme_keyring
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 89221 ']'
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 89221
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 89221 ']'
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 89221
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89221
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:21.491 killing process with pid 89221
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89221'
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 89221
00:29:21.491 18:36:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 89221
00:29:22.864 18:36:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:29:22.864 18:36:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:29:22.864 18:36:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:29:22.864 18:36:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:22.864 18:36:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:22.864 18:36:34
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.864 18:36:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.864 18:36:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.864 18:36:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:22.864 00:29:22.864 real 0m51.684s 00:29:22.864 user 2m30.393s 00:29:22.864 sys 0m5.790s 00:29:22.864 18:36:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:22.864 18:36:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:22.864 ************************************ 00:29:22.864 END TEST nvmf_timeout 00:29:22.864 ************************************ 00:29:22.864 18:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:29:22.864 18:36:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:29:22.864 18:36:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:22.864 00:29:22.864 real 6m32.961s 00:29:22.864 user 18m6.486s 00:29:22.864 sys 1m19.546s 00:29:22.864 18:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:22.864 18:36:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.864 ************************************ 00:29:22.864 END TEST nvmf_host 00:29:22.864 ************************************ 00:29:23.124 18:36:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:23.124 00:29:23.124 real 16m40.185s 00:29:23.124 user 43m35.291s 00:29:23.124 sys 4m6.040s 00:29:23.124 18:36:34 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:23.124 ************************************ 00:29:23.124 END TEST nvmf_tcp 00:29:23.124 18:36:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.124 ************************************ 00:29:23.124 18:36:34 -- common/autotest_common.sh@1142 -- # return 0 00:29:23.124 18:36:34 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:29:23.124 18:36:34 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:23.124 18:36:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:23.124 18:36:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.124 18:36:34 -- common/autotest_common.sh@10 -- # set +x 00:29:23.124 ************************************ 00:29:23.124 START TEST nvmf_dif 00:29:23.124 ************************************ 00:29:23.124 18:36:34 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:23.124 * Looking for test storage... 
00:29:23.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:23.124 18:36:35 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:23.124 18:36:35 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.124 18:36:35 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.124 18:36:35 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.124 18:36:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.124 18:36:35 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.124 18:36:35 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.124 18:36:35 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:23.124 18:36:35 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.124 18:36:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:23.124 18:36:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:23.124 18:36:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:23.124 18:36:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:23.124 18:36:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.124 18:36:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:23.124 18:36:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:23.124 18:36:35 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:23.124 Cannot find device "nvmf_tgt_br" 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@155 -- # true 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:23.124 Cannot find device "nvmf_tgt_br2" 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@156 -- # true 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:23.124 Cannot find device "nvmf_tgt_br" 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@158 -- # true 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:23.124 Cannot find device "nvmf_tgt_br2" 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@159 -- # true 00:29:23.124 18:36:35 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:23.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:23.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:23.383 
18:36:35 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:23.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:29:23.383 00:29:23.383 --- 10.0.0.2 ping statistics --- 00:29:23.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.383 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:23.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:23.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:29:23.383 00:29:23.383 --- 10.0.0.3 ping statistics --- 00:29:23.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.383 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:23.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:29:23.383 00:29:23.383 --- 10.0.0.1 ping statistics --- 00:29:23.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.383 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:23.383 18:36:35 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:23.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:23.949 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:23.949 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:23.949 18:36:35 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.949 18:36:35 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:23.949 18:36:35 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:23.949 18:36:35 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.949 18:36:35 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:23.949 18:36:35 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:23.949 18:36:35 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:23.949 18:36:35 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:23.949 18:36:35 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:23.949 18:36:35 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:23.949 18:36:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:23.949 18:36:35 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=90174 00:29:23.949 
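With NET_TYPE=virt, nvmf_veth_init (traced above) builds the whole test fabric in software: veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace as 10.0.0.2 and 10.0.0.3, an initiator end left in the root namespace as 10.0.0.1, an nvmf_br bridge joining the peer ends, an iptables rule opening TCP port 4420, and one ping per direction to prove the paths before any NVMe traffic flows. A condensed sketch of that topology using the same names and addresses, with the second target interface and all error handling omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator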
18:36:35 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:23.949 18:36:35 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 90174 00:29:23.949 18:36:35 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 90174 ']' 00:29:23.949 18:36:35 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.949 18:36:35 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:23.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.949 18:36:35 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.949 18:36:35 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:23.949 18:36:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:23.949 [2024-07-22 18:36:35.939510] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:23.949 [2024-07-22 18:36:35.939694] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.207 [2024-07-22 18:36:36.122589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.466 [2024-07-22 18:36:36.423753] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.466 [2024-07-22 18:36:36.423827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.466 [2024-07-22 18:36:36.423844] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.466 [2024-07-22 18:36:36.423859] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.466 [2024-07-22 18:36:36.423870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
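nvmfappstart then launches the target inside that namespace and waits until its JSON-RPC socket answers; the waitforlisten helper traced above does the polling against pid 90174. A rough equivalent, assuming the repository root as the working directory and the default /var/tmp/spdk.sock socket (waitforlisten additionally verifies that the pid stays alive):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # poll the RPC socket until the app is ready (stand-in for waitforlisten)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done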
00:29:24.466 [2024-07-22 18:36:36.423921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.725 [2024-07-22 18:36:36.634767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:29:24.984 18:36:36 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:24.984 18:36:36 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:29:24.984 18:36:36 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:24.984 18:36:36 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:24.984 18:36:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:24.984 18:36:36 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.984 18:36:36 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:24.984 18:36:36 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:24.984 18:36:36 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.984 18:36:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:24.984 [2024-07-22 18:36:36.894887] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.984 18:36:36 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.984 18:36:36 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:24.984 18:36:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:24.984 18:36:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.984 18:36:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:24.984 ************************************ 00:29:24.984 START TEST fio_dif_1_default 00:29:24.984 ************************************ 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:24.984 bdev_null0 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.984 18:36:36 
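Each dif test provisions its target through the same short RPC sequence: a null bdev whose 512-byte blocks carry 16 bytes of metadata and protection information of the requested DIF type (type 1 here; the rand_params test later switches to types 3 and 2), then a subsystem, a namespace and, as the next lines show, a TCP listener. Because the transport above was created with --dif-insert-or-strip, the target inserts and strips that protection information itself rather than exposing it to the host. A sketch of the sequence for cnode0, assuming rpc.py on the default socket:

    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420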
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:24.984 [2024-07-22 18:36:36.939088] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:24.984 18:36:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:24.985 { 00:29:24.985 "params": { 00:29:24.985 "name": "Nvme$subsystem", 00:29:24.985 "trtype": "$TEST_TRANSPORT", 00:29:24.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.985 "adrfam": "ipv4", 00:29:24.985 "trsvcid": "$NVMF_PORT", 00:29:24.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.985 "hdgst": ${hdgst:-false}, 00:29:24.985 "ddgst": ${ddgst:-false} 00:29:24.985 }, 00:29:24.985 "method": "bdev_nvme_attach_controller" 00:29:24.985 } 00:29:24.985 EOF 00:29:24.985 )") 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:24.985 "params": { 00:29:24.985 "name": "Nvme0", 00:29:24.985 "trtype": "tcp", 00:29:24.985 "traddr": "10.0.0.2", 00:29:24.985 "adrfam": "ipv4", 00:29:24.985 "trsvcid": "4420", 00:29:24.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:24.985 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:24.985 "hdgst": false, 00:29:24.985 "ddgst": false 00:29:24.985 }, 00:29:24.985 "method": "bdev_nvme_attach_controller" 00:29:24.985 }' 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:24.985 18:36:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:25.243 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:25.243 fio-3.35 00:29:25.243 Starting 1 thread 00:29:37.448 00:29:37.448 filename0: (groupid=0, jobs=1): err= 0: pid=90237: Mon Jul 22 18:36:48 2024 00:29:37.448 read: IOPS=6484, BW=25.3MiB/s (26.6MB/s)(253MiB/10001msec) 00:29:37.448 slat (usec): min=6, max=144, avg=12.11, stdev= 4.66 00:29:37.448 clat (usec): min=425, max=2524, avg=580.19, stdev=55.26 00:29:37.448 lat (usec): min=433, max=2541, avg=592.30, stdev=56.50 00:29:37.448 clat percentiles (usec): 00:29:37.448 | 1.00th=[ 498], 5.00th=[ 523], 10.00th=[ 529], 20.00th=[ 545], 00:29:37.448 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:29:37.448 | 70.00th=[ 594], 80.00th=[ 611], 90.00th=[ 635], 95.00th=[ 676], 00:29:37.448 | 99.00th=[ 750], 99.50th=[ 816], 99.90th=[ 889], 99.95th=[ 1123], 00:29:37.448 | 99.99th=[ 2409] 00:29:37.448 bw ( KiB/s): min=23872, max=27456, per=100.00%, avg=26140.63, stdev=872.05, samples=19 00:29:37.448 iops : min= 5968, max= 6864, avg=6535.16, stdev=218.01, samples=19 00:29:37.448 lat (usec) : 500=1.02%, 750=97.95%, 1000=0.97% 00:29:37.448 lat (msec) : 2=0.05%, 4=0.01% 00:29:37.448 cpu : usr=85.74%, sys=12.40%, ctx=23, majf=0, minf=1074 00:29:37.448 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:37.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.448 issued rwts: total=64856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.448 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:37.448 
00:29:37.448 Run status group 0 (all jobs): 00:29:37.448 READ: bw=25.3MiB/s (26.6MB/s), 25.3MiB/s-25.3MiB/s (26.6MB/s-26.6MB/s), io=253MiB (266MB), run=10001-10001msec 00:29:37.448 ----------------------------------------------------- 00:29:37.448 Suppressions used: 00:29:37.448 count bytes template 00:29:37.448 1 8 /usr/src/fio/parse.c 00:29:37.448 1 8 libtcmalloc_minimal.so 00:29:37.448 1 904 libcrypto.so 00:29:37.448 ----------------------------------------------------- 00:29:37.448 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.448 18:36:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.707 00:29:37.707 real 0m12.556s 00:29:37.707 user 0m10.640s 00:29:37.707 sys 0m1.649s 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:37.707 ************************************ 00:29:37.707 END TEST fio_dif_1_default 00:29:37.707 ************************************ 00:29:37.707 18:36:49 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:37.707 18:36:49 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:37.707 18:36:49 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:37.707 18:36:49 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:37.707 18:36:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:37.707 ************************************ 00:29:37.707 START TEST fio_dif_1_multi_subsystems 00:29:37.707 ************************************ 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:37.707 18:36:49 
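None of these fio jobs goes through the kernel NVMe initiator: fio runs with SPDK's bdev plugin LD_PRELOADed, ioengine=spdk_bdev, and the bdev layer described by the JSON that gen_nvmf_target_json writes to /dev/fd/62 — a bdev_nvme_attach_controller call aimed at the listener created earlier, which by SPDK's naming yields a bdev such as Nvme0n1 for fio to read. A standalone sketch of an equivalent invocation, assuming the plugin's usual {"subsystems": [...]} JSON-config layout, a hypothetical /tmp/nvme0.json path, and job values mirroring the filename0 header above (the real job file is generated by dif.sh):

    # wrap the attach-controller parameters printed above in a bdev JSON config
    json='{"subsystems": [{"subsystem": "bdev", "config": [{"method": "bdev_nvme_attach_controller",
      "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0"}}]}]}'
    printf '%s\n' "$json" > /tmp/nvme0.json
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        fio --name=filename0 --thread=1 --ioengine=spdk_bdev \
            --spdk_json_conf=/tmp/nvme0.json --filename=Nvme0n1 \
            --rw=randread --bs=4k --iodepth=4 --runtime=10 --time_based=1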
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.707 bdev_null0 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.707 [2024-07-22 18:36:49.537684] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.707 bdev_null1 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:37.707 
18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.707 { 00:29:37.707 "params": { 00:29:37.707 "name": "Nvme$subsystem", 00:29:37.707 "trtype": "$TEST_TRANSPORT", 00:29:37.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.707 "adrfam": "ipv4", 00:29:37.707 "trsvcid": "$NVMF_PORT", 00:29:37.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.707 "hdgst": ${hdgst:-false}, 00:29:37.707 "ddgst": ${ddgst:-false} 00:29:37.707 }, 00:29:37.707 "method": "bdev_nvme_attach_controller" 00:29:37.707 } 00:29:37.707 EOF 00:29:37.707 )") 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.707 { 00:29:37.707 "params": { 00:29:37.707 "name": "Nvme$subsystem", 00:29:37.707 "trtype": "$TEST_TRANSPORT", 00:29:37.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.707 "adrfam": "ipv4", 00:29:37.707 "trsvcid": "$NVMF_PORT", 00:29:37.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.707 "hdgst": ${hdgst:-false}, 00:29:37.707 "ddgst": ${ddgst:-false} 00:29:37.707 }, 00:29:37.707 "method": "bdev_nvme_attach_controller" 00:29:37.707 } 00:29:37.707 EOF 00:29:37.707 )") 00:29:37.707 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:37.708 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:37.708 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:37.708 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:29:37.708 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:37.708 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:37.708 "params": { 00:29:37.708 "name": "Nvme0", 00:29:37.708 "trtype": "tcp", 00:29:37.708 "traddr": "10.0.0.2", 00:29:37.708 "adrfam": "ipv4", 00:29:37.708 "trsvcid": "4420", 00:29:37.708 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:37.708 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:37.708 "hdgst": false, 00:29:37.708 "ddgst": false 00:29:37.708 }, 00:29:37.708 "method": "bdev_nvme_attach_controller" 00:29:37.708 },{ 00:29:37.708 "params": { 00:29:37.708 "name": "Nvme1", 00:29:37.708 "trtype": "tcp", 00:29:37.708 "traddr": "10.0.0.2", 00:29:37.708 "adrfam": "ipv4", 00:29:37.708 "trsvcid": "4420", 00:29:37.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:37.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:37.708 "hdgst": false, 00:29:37.708 "ddgst": false 00:29:37.708 }, 00:29:37.708 "method": "bdev_nvme_attach_controller" 00:29:37.708 }' 00:29:37.708 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:37.708 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:37.708 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:29:37.708 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:37.708 18:36:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:37.966 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:37.966 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:37.966 fio-3.35 00:29:37.966 Starting 2 threads 00:29:50.161 00:29:50.161 filename0: (groupid=0, jobs=1): err= 0: pid=90396: Mon Jul 22 18:37:00 2024 00:29:50.161 read: IOPS=3544, BW=13.8MiB/s (14.5MB/s)(138MiB/10001msec) 00:29:50.161 slat (usec): min=6, max=130, avg=18.22, stdev= 5.45 00:29:50.161 clat (usec): min=569, max=1991, avg=1077.66, stdev=95.64 00:29:50.161 lat (usec): min=579, max=2023, avg=1095.88, stdev=97.53 00:29:50.161 clat percentiles (usec): 00:29:50.161 | 1.00th=[ 914], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1012], 00:29:50.161 | 30.00th=[ 1029], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:29:50.161 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1205], 95.00th=[ 1254], 00:29:50.161 | 99.00th=[ 1385], 99.50th=[ 1467], 99.90th=[ 1631], 99.95th=[ 1729], 00:29:50.161 | 99.99th=[ 1975] 00:29:50.161 bw ( KiB/s): min=12704, max=14944, per=49.88%, avg=14141.74, stdev=730.02, samples=19 00:29:50.162 iops : min= 3176, max= 3736, avg=3535.37, stdev=182.50, samples=19 00:29:50.162 lat (usec) : 750=0.02%, 1000=14.56% 00:29:50.162 lat (msec) : 2=85.42% 00:29:50.162 cpu : usr=91.61%, sys=6.97%, ctx=10, majf=0, minf=1075 00:29:50.162 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:50.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.162 issued rwts: total=35444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.162 latency : 
target=0, window=0, percentile=100.00%, depth=4 00:29:50.162 filename1: (groupid=0, jobs=1): err= 0: pid=90397: Mon Jul 22 18:37:00 2024 00:29:50.162 read: IOPS=3543, BW=13.8MiB/s (14.5MB/s)(138MiB/10001msec) 00:29:50.162 slat (usec): min=8, max=175, avg=18.64, stdev= 6.03 00:29:50.162 clat (usec): min=578, max=2803, avg=1076.11, stdev=88.60 00:29:50.162 lat (usec): min=588, max=2829, avg=1094.74, stdev=90.50 00:29:50.162 clat percentiles (usec): 00:29:50.162 | 1.00th=[ 963], 5.00th=[ 988], 10.00th=[ 996], 20.00th=[ 1012], 00:29:50.162 | 30.00th=[ 1029], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1074], 00:29:50.162 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[ 1205], 95.00th=[ 1237], 00:29:50.162 | 99.00th=[ 1385], 99.50th=[ 1467], 99.90th=[ 1631], 99.95th=[ 1729], 00:29:50.162 | 99.99th=[ 2769] 00:29:50.162 bw ( KiB/s): min=12697, max=14944, per=49.87%, avg=14139.68, stdev=735.55, samples=19 00:29:50.162 iops : min= 3174, max= 3736, avg=3534.84, stdev=183.91, samples=19 00:29:50.162 lat (usec) : 750=0.01%, 1000=10.83% 00:29:50.162 lat (msec) : 2=89.14%, 4=0.02% 00:29:50.162 cpu : usr=91.53%, sys=6.96%, ctx=78, majf=0, minf=1075 00:29:50.162 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:50.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:50.162 issued rwts: total=35436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:50.162 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:50.162 00:29:50.162 Run status group 0 (all jobs): 00:29:50.162 READ: bw=27.7MiB/s (29.0MB/s), 13.8MiB/s-13.8MiB/s (14.5MB/s-14.5MB/s), io=277MiB (290MB), run=10001-10001msec 00:29:50.162 ----------------------------------------------------- 00:29:50.162 Suppressions used: 00:29:50.162 count bytes template 00:29:50.162 2 16 /usr/src/fio/parse.c 00:29:50.162 1 8 libtcmalloc_minimal.so 00:29:50.162 1 904 libcrypto.so 00:29:50.162 ----------------------------------------------------- 00:29:50.162 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in 
"$@" 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.162 00:29:50.162 real 0m12.638s 00:29:50.162 user 0m20.426s 00:29:50.162 sys 0m1.821s 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:50.162 18:37:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:50.162 ************************************ 00:29:50.162 END TEST fio_dif_1_multi_subsystems 00:29:50.162 ************************************ 00:29:50.421 18:37:02 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:50.421 18:37:02 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:50.421 18:37:02 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:50.421 18:37:02 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.421 18:37:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:50.421 ************************************ 00:29:50.421 START TEST fio_dif_rand_params 00:29:50.421 ************************************ 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.421 bdev_null0 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:50.421 [2024-07-22 18:37:02.225577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:50.421 { 00:29:50.421 "params": { 00:29:50.421 "name": "Nvme$subsystem", 00:29:50.421 "trtype": "$TEST_TRANSPORT", 00:29:50.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.421 "adrfam": "ipv4", 00:29:50.421 "trsvcid": "$NVMF_PORT", 00:29:50.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.421 "hdgst": ${hdgst:-false}, 00:29:50.421 "ddgst": ${ddgst:-false} 00:29:50.421 }, 00:29:50.421 "method": "bdev_nvme_attach_controller" 00:29:50.421 } 00:29:50.421 EOF 00:29:50.421 )") 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:50.421 "params": { 00:29:50.421 "name": "Nvme0", 00:29:50.421 "trtype": "tcp", 00:29:50.421 "traddr": "10.0.0.2", 00:29:50.421 "adrfam": "ipv4", 00:29:50.421 "trsvcid": "4420", 00:29:50.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:50.421 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:50.421 "hdgst": false, 00:29:50.421 "ddgst": false 00:29:50.421 }, 00:29:50.421 "method": "bdev_nvme_attach_controller" 00:29:50.421 }' 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:50.421 18:37:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:50.679 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:50.679 ... 
00:29:50.679 fio-3.35 00:29:50.679 Starting 3 threads 00:29:57.293 00:29:57.293 filename0: (groupid=0, jobs=1): err= 0: pid=90559: Mon Jul 22 18:37:08 2024 00:29:57.293 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(132MiB/5002msec) 00:29:57.293 slat (usec): min=7, max=136, avg=29.66, stdev=17.36 00:29:57.293 clat (usec): min=13442, max=18040, avg=14134.05, stdev=551.23 00:29:57.293 lat (usec): min=13458, max=18065, avg=14163.70, stdev=552.73 00:29:57.293 clat percentiles (usec): 00:29:57.293 | 1.00th=[13566], 5.00th=[13698], 10.00th=[13698], 20.00th=[13698], 00:29:57.293 | 30.00th=[13829], 40.00th=[13829], 50.00th=[13960], 60.00th=[14091], 00:29:57.293 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14877], 95.00th=[15139], 00:29:57.293 | 99.00th=[16319], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:29:57.293 | 99.99th=[17957] 00:29:57.293 bw ( KiB/s): min=25344, max=27648, per=33.39%, avg=27050.67, stdev=839.35, samples=9 00:29:57.293 iops : min= 198, max= 216, avg=211.33, stdev= 6.56, samples=9 00:29:57.293 lat (msec) : 20=100.00% 00:29:57.293 cpu : usr=92.06%, sys=7.16%, ctx=39, majf=0, minf=1074 00:29:57.293 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:57.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.293 issued rwts: total=1056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.293 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:57.293 filename0: (groupid=0, jobs=1): err= 0: pid=90560: Mon Jul 22 18:37:08 2024 00:29:57.293 read: IOPS=210, BW=26.4MiB/s (27.7MB/s)(132MiB/5005msec) 00:29:57.293 slat (nsec): min=6966, max=81929, avg=27017.02, stdev=13855.82 00:29:57.293 clat (usec): min=13450, max=20236, avg=14147.62, stdev=606.31 00:29:57.293 lat (usec): min=13466, max=20281, avg=14174.63, stdev=607.76 00:29:57.293 clat percentiles (usec): 00:29:57.293 | 1.00th=[13566], 5.00th=[13698], 10.00th=[13698], 20.00th=[13829], 00:29:57.293 | 30.00th=[13829], 40.00th=[13960], 50.00th=[13960], 60.00th=[14091], 00:29:57.293 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14877], 95.00th=[15139], 00:29:57.293 | 99.00th=[16188], 99.50th=[17957], 99.90th=[20317], 99.95th=[20317], 00:29:57.293 | 99.99th=[20317] 00:29:57.293 bw ( KiB/s): min=25344, max=28416, per=33.37%, avg=27033.60, stdev=944.08, samples=10 00:29:57.293 iops : min= 198, max= 222, avg=211.20, stdev= 7.38, samples=10 00:29:57.293 lat (msec) : 20=99.72%, 50=0.28% 00:29:57.293 cpu : usr=92.17%, sys=7.11%, ctx=587, majf=0, minf=1075 00:29:57.293 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:57.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.293 issued rwts: total=1056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.293 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:57.293 filename0: (groupid=0, jobs=1): err= 0: pid=90561: Mon Jul 22 18:37:08 2024 00:29:57.293 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(132MiB/5001msec) 00:29:57.293 slat (nsec): min=6596, max=85242, avg=28686.89, stdev=14772.28 00:29:57.293 clat (usec): min=13557, max=18131, avg=14133.28, stdev=529.41 00:29:57.293 lat (usec): min=13578, max=18178, avg=14161.97, stdev=530.52 00:29:57.293 clat percentiles (usec): 00:29:57.293 | 1.00th=[13566], 5.00th=[13698], 10.00th=[13698], 20.00th=[13829], 00:29:57.293 | 30.00th=[13829], 40.00th=[13960], 
50.00th=[13960], 60.00th=[14091], 00:29:57.293 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14877], 95.00th=[15139], 00:29:57.293 | 99.00th=[16319], 99.50th=[16909], 99.90th=[18220], 99.95th=[18220], 00:29:57.293 | 99.99th=[18220] 00:29:57.293 bw ( KiB/s): min=25344, max=27648, per=33.39%, avg=27056.56, stdev=838.19, samples=9 00:29:57.293 iops : min= 198, max= 216, avg=211.33, stdev= 6.56, samples=9 00:29:57.293 lat (msec) : 20=100.00% 00:29:57.293 cpu : usr=92.22%, sys=7.14%, ctx=8, majf=0, minf=1072 00:29:57.293 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:57.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.293 issued rwts: total=1056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:57.293 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:57.293 00:29:57.293 Run status group 0 (all jobs): 00:29:57.293 READ: bw=79.1MiB/s (83.0MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=396MiB (415MB), run=5001-5005msec 00:29:57.859 ----------------------------------------------------- 00:29:57.859 Suppressions used: 00:29:57.859 count bytes template 00:29:57.859 5 44 /usr/src/fio/parse.c 00:29:57.859 1 8 libtcmalloc_minimal.so 00:29:57.859 1 904 libcrypto.so 00:29:57.859 ----------------------------------------------------- 00:29:57.859 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:57.859 18:37:09 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 bdev_null0 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 [2024-07-22 18:37:09.689620] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 bdev_null1 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:57.859 18:37:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 bdev_null2 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.859 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:57.860 18:37:09 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:57.860 { 00:29:57.860 "params": { 00:29:57.860 "name": "Nvme$subsystem", 00:29:57.860 "trtype": "$TEST_TRANSPORT", 00:29:57.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:57.860 "adrfam": "ipv4", 00:29:57.860 "trsvcid": "$NVMF_PORT", 00:29:57.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:57.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:57.860 "hdgst": ${hdgst:-false}, 00:29:57.860 "ddgst": ${ddgst:-false} 00:29:57.860 }, 00:29:57.860 "method": "bdev_nvme_attach_controller" 00:29:57.860 } 00:29:57.860 EOF 00:29:57.860 )") 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:57.860 { 00:29:57.860 "params": { 00:29:57.860 "name": "Nvme$subsystem", 00:29:57.860 "trtype": "$TEST_TRANSPORT", 00:29:57.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:57.860 "adrfam": "ipv4", 00:29:57.860 "trsvcid": "$NVMF_PORT", 00:29:57.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:57.860 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:29:57.860 "hdgst": ${hdgst:-false}, 00:29:57.860 "ddgst": ${ddgst:-false} 00:29:57.860 }, 00:29:57.860 "method": "bdev_nvme_attach_controller" 00:29:57.860 } 00:29:57.860 EOF 00:29:57.860 )") 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:57.860 { 00:29:57.860 "params": { 00:29:57.860 "name": "Nvme$subsystem", 00:29:57.860 "trtype": "$TEST_TRANSPORT", 00:29:57.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:57.860 "adrfam": "ipv4", 00:29:57.860 "trsvcid": "$NVMF_PORT", 00:29:57.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:57.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:57.860 "hdgst": ${hdgst:-false}, 00:29:57.860 "ddgst": ${ddgst:-false} 00:29:57.860 }, 00:29:57.860 "method": "bdev_nvme_attach_controller" 00:29:57.860 } 00:29:57.860 EOF 00:29:57.860 )") 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:57.860 "params": { 00:29:57.860 "name": "Nvme0", 00:29:57.860 "trtype": "tcp", 00:29:57.860 "traddr": "10.0.0.2", 00:29:57.860 "adrfam": "ipv4", 00:29:57.860 "trsvcid": "4420", 00:29:57.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:57.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:57.860 "hdgst": false, 00:29:57.860 "ddgst": false 00:29:57.860 }, 00:29:57.860 "method": "bdev_nvme_attach_controller" 00:29:57.860 },{ 00:29:57.860 "params": { 00:29:57.860 "name": "Nvme1", 00:29:57.860 "trtype": "tcp", 00:29:57.860 "traddr": "10.0.0.2", 00:29:57.860 "adrfam": "ipv4", 00:29:57.860 "trsvcid": "4420", 00:29:57.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:57.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:57.860 "hdgst": false, 00:29:57.860 "ddgst": false 00:29:57.860 }, 00:29:57.860 "method": "bdev_nvme_attach_controller" 00:29:57.860 },{ 00:29:57.860 "params": { 00:29:57.860 "name": "Nvme2", 00:29:57.860 "trtype": "tcp", 00:29:57.860 "traddr": "10.0.0.2", 00:29:57.860 "adrfam": "ipv4", 00:29:57.860 "trsvcid": "4420", 00:29:57.860 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:57.860 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:57.860 "hdgst": false, 00:29:57.860 "ddgst": false 00:29:57.860 }, 00:29:57.860 "method": "bdev_nvme_attach_controller" 00:29:57.860 }' 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:57.860 18:37:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:58.118 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:58.118 ... 00:29:58.118 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:58.118 ... 00:29:58.118 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:58.118 ... 00:29:58.118 fio-3.35 00:29:58.118 Starting 24 threads 00:30:10.314 00:30:10.314 filename0: (groupid=0, jobs=1): err= 0: pid=90655: Mon Jul 22 18:37:21 2024 00:30:10.314 read: IOPS=168, BW=674KiB/s (690kB/s)(6804KiB/10094msec) 00:30:10.314 slat (usec): min=5, max=8030, avg=23.92, stdev=194.56 00:30:10.314 clat (msec): min=5, max=207, avg=94.59, stdev=34.98 00:30:10.314 lat (msec): min=5, max=207, avg=94.62, stdev=34.98 00:30:10.314 clat percentiles (msec): 00:30:10.314 | 1.00th=[ 8], 5.00th=[ 28], 10.00th=[ 48], 20.00th=[ 72], 00:30:10.314 | 30.00th=[ 84], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 103], 00:30:10.314 | 70.00th=[ 110], 80.00th=[ 121], 90.00th=[ 133], 95.00th=[ 146], 00:30:10.314 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 209], 99.95th=[ 209], 00:30:10.314 | 99.99th=[ 209] 00:30:10.314 bw ( KiB/s): min= 360, max= 1466, per=4.65%, avg=673.50, stdev=211.25, samples=20 00:30:10.314 iops : min= 90, max= 366, avg=168.30, stdev=52.72, samples=20 00:30:10.314 lat (msec) : 10=1.88%, 20=2.70%, 50=7.35%, 100=47.09%, 250=40.98% 00:30:10.314 cpu : usr=36.02%, sys=2.39%, ctx=1100, majf=0, minf=1075 00:30:10.314 IO depths : 1=0.2%, 2=0.7%, 4=2.1%, 8=80.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:30:10.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.314 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.314 issued rwts: total=1701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.314 filename0: (groupid=0, jobs=1): err= 0: pid=90656: Mon Jul 22 18:37:21 2024 00:30:10.314 read: IOPS=138, BW=555KiB/s (568kB/s)(5568KiB/10038msec) 00:30:10.314 slat (usec): min=10, max=4061, avg=32.22, stdev=156.57 00:30:10.314 clat (msec): min=47, max=207, avg=114.91, stdev=25.53 00:30:10.314 lat (msec): min=47, max=207, avg=114.94, stdev=25.53 00:30:10.314 clat percentiles (msec): 00:30:10.314 | 1.00th=[ 48], 5.00th=[ 84], 10.00th=[ 90], 20.00th=[ 95], 00:30:10.314 | 30.00th=[ 99], 40.00th=[ 103], 50.00th=[ 109], 60.00th=[ 117], 00:30:10.314 | 70.00th=[ 127], 80.00th=[ 136], 90.00th=[ 146], 95.00th=[ 157], 00:30:10.314 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 209], 99.95th=[ 209], 00:30:10.314 | 99.99th=[ 209] 00:30:10.314 bw ( KiB/s): min= 368, max= 752, per=3.83%, avg=554.60, stdev=98.69, samples=20 00:30:10.314 iops : min= 92, max= 188, avg=138.40, stdev=24.76, samples=20 00:30:10.314 lat (msec) : 50=1.01%, 100=32.90%, 250=66.09% 00:30:10.314 cpu : usr=47.24%, sys=2.90%, ctx=1421, majf=0, minf=1074 00:30:10.314 IO depths : 1=0.1%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:10.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.314 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.314 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.314 filename0: (groupid=0, jobs=1): err= 0: pid=90657: Mon Jul 22 18:37:21 2024 00:30:10.314 read: IOPS=140, 
BW=562KiB/s (576kB/s)(5632KiB/10021msec) 00:30:10.314 slat (usec): min=5, max=8055, avg=43.38, stdev=400.53 00:30:10.314 clat (msec): min=37, max=203, avg=113.54, stdev=26.06 00:30:10.314 lat (msec): min=37, max=203, avg=113.58, stdev=26.07 00:30:10.314 clat percentiles (msec): 00:30:10.314 | 1.00th=[ 48], 5.00th=[ 85], 10.00th=[ 89], 20.00th=[ 94], 00:30:10.314 | 30.00th=[ 96], 40.00th=[ 103], 50.00th=[ 109], 60.00th=[ 117], 00:30:10.314 | 70.00th=[ 125], 80.00th=[ 134], 90.00th=[ 144], 95.00th=[ 157], 00:30:10.314 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:30:10.314 | 99.99th=[ 205] 00:30:10.314 bw ( KiB/s): min= 383, max= 750, per=3.85%, avg=558.00, stdev=101.97, samples=19 00:30:10.314 iops : min= 95, max= 187, avg=139.37, stdev=25.55, samples=19 00:30:10.314 lat (msec) : 50=1.28%, 100=36.86%, 250=61.86% 00:30:10.314 cpu : usr=36.17%, sys=2.32%, ctx=1232, majf=0, minf=1074 00:30:10.314 IO depths : 1=0.1%, 2=6.2%, 4=24.7%, 8=56.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:10.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.314 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.314 issued rwts: total=1408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.314 filename0: (groupid=0, jobs=1): err= 0: pid=90658: Mon Jul 22 18:37:21 2024 00:30:10.314 read: IOPS=153, BW=614KiB/s (629kB/s)(6164KiB/10039msec) 00:30:10.314 slat (usec): min=4, max=8052, avg=43.55, stdev=456.39 00:30:10.314 clat (msec): min=31, max=204, avg=103.82, stdev=31.25 00:30:10.314 lat (msec): min=31, max=204, avg=103.86, stdev=31.26 00:30:10.314 clat percentiles (msec): 00:30:10.314 | 1.00th=[ 37], 5.00th=[ 49], 10.00th=[ 64], 20.00th=[ 84], 00:30:10.314 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 108], 00:30:10.314 | 70.00th=[ 117], 80.00th=[ 130], 90.00th=[ 144], 95.00th=[ 157], 00:30:10.314 | 99.00th=[ 203], 99.50th=[ 203], 99.90th=[ 205], 99.95th=[ 205], 00:30:10.314 | 99.99th=[ 205] 00:30:10.314 bw ( KiB/s): min= 368, max= 768, per=4.19%, avg=606.89, stdev=125.35, samples=19 00:30:10.314 iops : min= 92, max= 192, avg=151.58, stdev=31.38, samples=19 00:30:10.314 lat (msec) : 50=5.39%, 100=46.33%, 250=48.28% 00:30:10.314 cpu : usr=32.75%, sys=1.87%, ctx=959, majf=0, minf=1075 00:30:10.314 IO depths : 1=0.1%, 2=3.5%, 4=14.0%, 8=68.6%, 16=13.9%, 32=0.0%, >=64=0.0% 00:30:10.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.314 complete : 0=0.0%, 4=90.9%, 8=6.0%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.314 issued rwts: total=1541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.314 filename0: (groupid=0, jobs=1): err= 0: pid=90659: Mon Jul 22 18:37:21 2024 00:30:10.314 read: IOPS=146, BW=588KiB/s (602kB/s)(5936KiB/10097msec) 00:30:10.314 slat (usec): min=4, max=8040, avg=25.19, stdev=208.83 00:30:10.314 clat (msec): min=9, max=214, avg=108.39, stdev=33.66 00:30:10.314 lat (msec): min=9, max=214, avg=108.41, stdev=33.66 00:30:10.314 clat percentiles (msec): 00:30:10.314 | 1.00th=[ 13], 5.00th=[ 41], 10.00th=[ 81], 20.00th=[ 93], 00:30:10.314 | 30.00th=[ 96], 40.00th=[ 97], 50.00th=[ 107], 60.00th=[ 114], 00:30:10.314 | 70.00th=[ 123], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 167], 00:30:10.314 | 99.00th=[ 205], 99.50th=[ 215], 99.90th=[ 215], 99.95th=[ 215], 00:30:10.314 | 99.99th=[ 215] 00:30:10.314 bw ( KiB/s): min= 272, max= 1152, per=4.05%, 
avg=586.85, stdev=161.14, samples=20 00:30:10.314 iops : min= 68, max= 288, avg=146.65, stdev=40.30, samples=20 00:30:10.314 lat (msec) : 10=0.94%, 20=1.21%, 50=4.58%, 100=38.81%, 250=54.45% 00:30:10.314 cpu : usr=36.73%, sys=2.46%, ctx=1075, majf=0, minf=1072 00:30:10.314 IO depths : 1=0.1%, 2=5.7%, 4=22.8%, 8=58.6%, 16=12.8%, 32=0.0%, >=64=0.0% 00:30:10.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.314 complete : 0=0.0%, 4=93.8%, 8=1.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.314 issued rwts: total=1484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.314 filename0: (groupid=0, jobs=1): err= 0: pid=90660: Mon Jul 22 18:37:21 2024 00:30:10.314 read: IOPS=165, BW=661KiB/s (677kB/s)(6648KiB/10054msec) 00:30:10.314 slat (usec): min=6, max=16060, avg=46.56, stdev=489.22 00:30:10.314 clat (msec): min=24, max=210, avg=96.32, stdev=30.36 00:30:10.314 lat (msec): min=24, max=210, avg=96.37, stdev=30.35 00:30:10.314 clat percentiles (msec): 00:30:10.314 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 57], 20.00th=[ 72], 00:30:10.314 | 30.00th=[ 83], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 102], 00:30:10.314 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 144], 00:30:10.314 | 99.00th=[ 190], 99.50th=[ 203], 99.90th=[ 211], 99.95th=[ 211], 00:30:10.314 | 99.99th=[ 211] 00:30:10.314 bw ( KiB/s): min= 416, max= 808, per=4.55%, avg=659.75, stdev=100.25, samples=20 00:30:10.314 iops : min= 104, max= 202, avg=164.75, stdev=25.10, samples=20 00:30:10.314 lat (msec) : 50=8.42%, 100=50.66%, 250=40.91% 00:30:10.315 cpu : usr=38.50%, sys=2.81%, ctx=1330, majf=0, minf=1063 00:30:10.315 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:30:10.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 issued rwts: total=1662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.315 filename0: (groupid=0, jobs=1): err= 0: pid=90661: Mon Jul 22 18:37:21 2024 00:30:10.315 read: IOPS=141, BW=568KiB/s (582kB/s)(5688KiB/10016msec) 00:30:10.315 slat (usec): min=7, max=4057, avg=34.76, stdev=213.39 00:30:10.315 clat (msec): min=37, max=209, avg=112.39, stdev=26.44 00:30:10.315 lat (msec): min=37, max=209, avg=112.43, stdev=26.45 00:30:10.315 clat percentiles (msec): 00:30:10.315 | 1.00th=[ 48], 5.00th=[ 82], 10.00th=[ 88], 20.00th=[ 94], 00:30:10.315 | 30.00th=[ 96], 40.00th=[ 101], 50.00th=[ 107], 60.00th=[ 112], 00:30:10.315 | 70.00th=[ 125], 80.00th=[ 132], 90.00th=[ 148], 95.00th=[ 157], 00:30:10.315 | 99.00th=[ 201], 99.50th=[ 209], 99.90th=[ 209], 99.95th=[ 209], 00:30:10.315 | 99.99th=[ 209] 00:30:10.315 bw ( KiB/s): min= 272, max= 750, per=3.90%, avg=564.74, stdev=110.32, samples=19 00:30:10.315 iops : min= 68, max= 187, avg=141.05, stdev=27.58, samples=19 00:30:10.315 lat (msec) : 50=1.97%, 100=37.55%, 250=60.48% 00:30:10.315 cpu : usr=40.26%, sys=2.98%, ctx=1173, majf=0, minf=1075 00:30:10.315 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:10.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.315 
filename0: (groupid=0, jobs=1): err= 0: pid=90662: Mon Jul 22 18:37:21 2024 00:30:10.315 read: IOPS=152, BW=609KiB/s (623kB/s)(6128KiB/10065msec) 00:30:10.315 slat (usec): min=6, max=12068, avg=48.65, stdev=477.71 00:30:10.315 clat (msec): min=34, max=204, avg=104.61, stdev=32.78 00:30:10.315 lat (msec): min=34, max=204, avg=104.66, stdev=32.78 00:30:10.315 clat percentiles (msec): 00:30:10.315 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 83], 00:30:10.315 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 108], 00:30:10.315 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 157], 00:30:10.315 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:30:10.315 | 99.99th=[ 205] 00:30:10.315 bw ( KiB/s): min= 368, max= 880, per=4.19%, avg=607.30, stdev=132.22, samples=20 00:30:10.315 iops : min= 92, max= 220, avg=151.60, stdev=33.21, samples=20 00:30:10.315 lat (msec) : 50=6.79%, 100=44.45%, 250=48.76% 00:30:10.315 cpu : usr=31.23%, sys=1.79%, ctx=896, majf=0, minf=1073 00:30:10.315 IO depths : 1=0.1%, 2=2.9%, 4=11.9%, 8=70.6%, 16=14.5%, 32=0.0%, >=64=0.0% 00:30:10.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 complete : 0=0.0%, 4=90.5%, 8=6.8%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 issued rwts: total=1532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.315 filename1: (groupid=0, jobs=1): err= 0: pid=90663: Mon Jul 22 18:37:21 2024 00:30:10.315 read: IOPS=141, BW=566KiB/s (580kB/s)(5688KiB/10045msec) 00:30:10.315 slat (usec): min=4, max=8048, avg=43.87, stdev=438.17 00:30:10.315 clat (msec): min=35, max=300, avg=112.74, stdev=29.08 00:30:10.315 lat (msec): min=35, max=300, avg=112.78, stdev=29.08 00:30:10.315 clat percentiles (msec): 00:30:10.315 | 1.00th=[ 39], 5.00th=[ 82], 10.00th=[ 89], 20.00th=[ 95], 00:30:10.315 | 30.00th=[ 97], 40.00th=[ 101], 50.00th=[ 108], 60.00th=[ 114], 00:30:10.315 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 146], 95.00th=[ 155], 00:30:10.315 | 99.00th=[ 249], 99.50th=[ 249], 99.90th=[ 300], 99.95th=[ 300], 00:30:10.315 | 99.99th=[ 300] 00:30:10.315 bw ( KiB/s): min= 368, max= 752, per=3.85%, avg=558.84, stdev=102.53, samples=19 00:30:10.315 iops : min= 92, max= 188, avg=139.58, stdev=25.67, samples=19 00:30:10.315 lat (msec) : 50=2.11%, 100=38.19%, 250=59.56%, 500=0.14% 00:30:10.315 cpu : usr=34.22%, sys=2.29%, ctx=1048, majf=0, minf=1073 00:30:10.315 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:10.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.315 filename1: (groupid=0, jobs=1): err= 0: pid=90664: Mon Jul 22 18:37:21 2024 00:30:10.315 read: IOPS=150, BW=601KiB/s (615kB/s)(6060KiB/10082msec) 00:30:10.315 slat (usec): min=4, max=8079, avg=47.23, stdev=399.86 00:30:10.315 clat (msec): min=12, max=252, avg=105.90, stdev=34.64 00:30:10.315 lat (msec): min=12, max=260, avg=105.95, stdev=34.68 00:30:10.315 clat percentiles (msec): 00:30:10.315 | 1.00th=[ 14], 5.00th=[ 42], 10.00th=[ 64], 20.00th=[ 88], 00:30:10.315 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 103], 60.00th=[ 112], 00:30:10.315 | 70.00th=[ 122], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 159], 00:30:10.315 | 99.00th=[ 211], 99.50th=[ 211], 99.90th=[ 
253], 99.95th=[ 253], 00:30:10.315 | 99.99th=[ 253] 00:30:10.315 bw ( KiB/s): min= 368, max= 1152, per=4.14%, avg=599.45, stdev=167.86, samples=20 00:30:10.315 iops : min= 92, max= 288, avg=149.80, stdev=42.01, samples=20 00:30:10.315 lat (msec) : 20=1.98%, 50=5.81%, 100=41.78%, 250=50.30%, 500=0.13% 00:30:10.315 cpu : usr=38.41%, sys=2.83%, ctx=1183, majf=0, minf=1075 00:30:10.315 IO depths : 1=0.1%, 2=4.9%, 4=19.2%, 8=62.5%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:10.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 complete : 0=0.0%, 4=92.7%, 8=3.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 issued rwts: total=1515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.315 filename1: (groupid=0, jobs=1): err= 0: pid=90665: Mon Jul 22 18:37:21 2024 00:30:10.315 read: IOPS=142, BW=569KiB/s (582kB/s)(5716KiB/10051msec) 00:30:10.315 slat (usec): min=4, max=9051, avg=42.64, stdev=355.22 00:30:10.315 clat (msec): min=35, max=255, avg=112.01, stdev=31.28 00:30:10.315 lat (msec): min=35, max=255, avg=112.05, stdev=31.27 00:30:10.315 clat percentiles (msec): 00:30:10.315 | 1.00th=[ 41], 5.00th=[ 62], 10.00th=[ 74], 20.00th=[ 91], 00:30:10.315 | 30.00th=[ 96], 40.00th=[ 101], 50.00th=[ 108], 60.00th=[ 121], 00:30:10.315 | 70.00th=[ 129], 80.00th=[ 138], 90.00th=[ 148], 95.00th=[ 157], 00:30:10.315 | 99.00th=[ 213], 99.50th=[ 213], 99.90th=[ 255], 99.95th=[ 255], 00:30:10.315 | 99.99th=[ 255] 00:30:10.315 bw ( KiB/s): min= 384, max= 768, per=3.92%, avg=567.35, stdev=113.59, samples=20 00:30:10.315 iops : min= 96, max= 192, avg=141.70, stdev=28.38, samples=20 00:30:10.315 lat (msec) : 50=3.36%, 100=36.81%, 250=59.69%, 500=0.14% 00:30:10.315 cpu : usr=34.35%, sys=2.03%, ctx=1083, majf=0, minf=1075 00:30:10.315 IO depths : 1=0.1%, 2=4.6%, 4=18.4%, 8=63.5%, 16=13.4%, 32=0.0%, >=64=0.0% 00:30:10.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 complete : 0=0.0%, 4=92.4%, 8=3.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 issued rwts: total=1429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.315 filename1: (groupid=0, jobs=1): err= 0: pid=90666: Mon Jul 22 18:37:21 2024 00:30:10.315 read: IOPS=168, BW=675KiB/s (691kB/s)(6776KiB/10037msec) 00:30:10.315 slat (nsec): min=6913, max=89788, avg=19416.14, stdev=9144.90 00:30:10.315 clat (msec): min=24, max=212, avg=94.57, stdev=30.91 00:30:10.315 lat (msec): min=24, max=212, avg=94.59, stdev=30.91 00:30:10.315 clat percentiles (msec): 00:30:10.315 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 57], 20.00th=[ 72], 00:30:10.315 | 30.00th=[ 78], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 97], 00:30:10.315 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 133], 95.00th=[ 144], 00:30:10.315 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 213], 99.95th=[ 213], 00:30:10.315 | 99.99th=[ 213] 00:30:10.315 bw ( KiB/s): min= 360, max= 785, per=4.64%, avg=671.42, stdev=112.03, samples=19 00:30:10.315 iops : min= 90, max= 196, avg=167.74, stdev=28.01, samples=19 00:30:10.315 lat (msec) : 50=8.21%, 100=54.66%, 250=37.13% 00:30:10.315 cpu : usr=35.00%, sys=2.36%, ctx=979, majf=0, minf=1074 00:30:10.315 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:30:10.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 issued rwts: 
total=1694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.315 filename1: (groupid=0, jobs=1): err= 0: pid=90667: Mon Jul 22 18:37:21 2024 00:30:10.315 read: IOPS=160, BW=641KiB/s (657kB/s)(6456KiB/10068msec) 00:30:10.315 slat (usec): min=4, max=8075, avg=32.55, stdev=282.93 00:30:10.315 clat (msec): min=24, max=211, avg=99.47, stdev=32.83 00:30:10.315 lat (msec): min=24, max=211, avg=99.50, stdev=32.83 00:30:10.315 clat percentiles (msec): 00:30:10.315 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 60], 20.00th=[ 72], 00:30:10.315 | 30.00th=[ 84], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 106], 00:30:10.315 | 70.00th=[ 116], 80.00th=[ 130], 90.00th=[ 142], 95.00th=[ 155], 00:30:10.315 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 211], 99.95th=[ 211], 00:30:10.315 | 99.99th=[ 211] 00:30:10.315 bw ( KiB/s): min= 399, max= 890, per=4.41%, avg=639.55, stdev=129.91, samples=20 00:30:10.315 iops : min= 99, max= 222, avg=159.70, stdev=32.54, samples=20 00:30:10.315 lat (msec) : 50=8.67%, 100=46.90%, 250=44.42% 00:30:10.315 cpu : usr=31.02%, sys=2.29%, ctx=889, majf=0, minf=1075 00:30:10.315 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=77.4%, 16=15.1%, 32=0.0%, >=64=0.0% 00:30:10.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 complete : 0=0.0%, 4=88.5%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.315 issued rwts: total=1614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.315 filename1: (groupid=0, jobs=1): err= 0: pid=90668: Mon Jul 22 18:37:21 2024 00:30:10.315 read: IOPS=159, BW=639KiB/s (655kB/s)(6396KiB/10003msec) 00:30:10.315 slat (usec): min=6, max=8037, avg=23.77, stdev=200.80 00:30:10.316 clat (msec): min=2, max=214, avg=99.96, stdev=37.96 00:30:10.316 lat (msec): min=2, max=214, avg=99.98, stdev=37.96 00:30:10.316 clat percentiles (msec): 00:30:10.316 | 1.00th=[ 4], 5.00th=[ 19], 10.00th=[ 55], 20.00th=[ 72], 00:30:10.316 | 30.00th=[ 86], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 108], 00:30:10.316 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 157], 00:30:10.316 | 99.00th=[ 205], 99.50th=[ 215], 99.90th=[ 215], 99.95th=[ 215], 00:30:10.316 | 99.99th=[ 215] 00:30:10.316 bw ( KiB/s): min= 383, max= 816, per=4.14%, avg=599.95, stdev=137.82, samples=19 00:30:10.316 iops : min= 95, max= 204, avg=149.89, stdev=34.51, samples=19 00:30:10.316 lat (msec) : 4=2.00%, 10=2.19%, 20=0.88%, 50=4.13%, 100=42.71% 00:30:10.316 lat (msec) : 250=48.09% 00:30:10.316 cpu : usr=32.94%, sys=2.55%, ctx=927, majf=0, minf=1075 00:30:10.316 IO depths : 1=0.1%, 2=2.8%, 4=10.9%, 8=71.9%, 16=14.4%, 32=0.0%, >=64=0.0% 00:30:10.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 complete : 0=0.0%, 4=90.1%, 8=7.5%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 issued rwts: total=1599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.316 filename1: (groupid=0, jobs=1): err= 0: pid=90669: Mon Jul 22 18:37:21 2024 00:30:10.316 read: IOPS=148, BW=595KiB/s (609kB/s)(5952KiB/10003msec) 00:30:10.316 slat (usec): min=6, max=8052, avg=37.08, stdev=306.18 00:30:10.316 clat (msec): min=2, max=211, avg=107.30, stdev=34.55 00:30:10.316 lat (msec): min=2, max=211, avg=107.34, stdev=34.57 00:30:10.316 clat percentiles (msec): 00:30:10.316 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 84], 20.00th=[ 94], 00:30:10.316 | 30.00th=[ 96], 40.00th=[ 97], 
50.00th=[ 108], 60.00th=[ 112], 00:30:10.316 | 70.00th=[ 121], 80.00th=[ 130], 90.00th=[ 142], 95.00th=[ 163], 00:30:10.316 | 99.00th=[ 209], 99.50th=[ 209], 99.90th=[ 211], 99.95th=[ 211], 00:30:10.316 | 99.99th=[ 211] 00:30:10.316 bw ( KiB/s): min= 384, max= 640, per=3.85%, avg=558.84, stdev=93.08, samples=19 00:30:10.316 iops : min= 96, max= 160, avg=139.63, stdev=23.24, samples=19 00:30:10.316 lat (msec) : 4=2.15%, 10=2.15%, 20=1.08%, 50=0.13%, 100=37.23% 00:30:10.316 lat (msec) : 250=57.26% 00:30:10.316 cpu : usr=34.44%, sys=2.55%, ctx=1179, majf=0, minf=1075 00:30:10.316 IO depths : 1=0.1%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:10.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 issued rwts: total=1488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.316 filename1: (groupid=0, jobs=1): err= 0: pid=90670: Mon Jul 22 18:37:21 2024 00:30:10.316 read: IOPS=165, BW=660KiB/s (676kB/s)(6644KiB/10062msec) 00:30:10.316 slat (usec): min=4, max=8043, avg=30.58, stdev=315.15 00:30:10.316 clat (msec): min=24, max=217, avg=96.60, stdev=31.43 00:30:10.316 lat (msec): min=24, max=217, avg=96.63, stdev=31.44 00:30:10.316 clat percentiles (msec): 00:30:10.316 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 56], 20.00th=[ 72], 00:30:10.316 | 30.00th=[ 85], 40.00th=[ 89], 50.00th=[ 96], 60.00th=[ 104], 00:30:10.316 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 134], 95.00th=[ 148], 00:30:10.316 | 99.00th=[ 197], 99.50th=[ 199], 99.90th=[ 209], 99.95th=[ 218], 00:30:10.316 | 99.99th=[ 218] 00:30:10.316 bw ( KiB/s): min= 464, max= 1045, per=4.54%, avg=657.25, stdev=127.38, samples=20 00:30:10.316 iops : min= 116, max= 261, avg=164.20, stdev=31.78, samples=20 00:30:10.316 lat (msec) : 50=8.43%, 100=49.01%, 250=42.56% 00:30:10.316 cpu : usr=34.32%, sys=1.97%, ctx=1048, majf=0, minf=1074 00:30:10.316 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:30:10.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 issued rwts: total=1661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.316 filename2: (groupid=0, jobs=1): err= 0: pid=90671: Mon Jul 22 18:37:21 2024 00:30:10.316 read: IOPS=146, BW=587KiB/s (601kB/s)(5916KiB/10079msec) 00:30:10.316 slat (usec): min=5, max=8060, avg=30.56, stdev=295.47 00:30:10.316 clat (msec): min=28, max=203, avg=108.62, stdev=31.49 00:30:10.316 lat (msec): min=28, max=203, avg=108.65, stdev=31.48 00:30:10.316 clat percentiles (msec): 00:30:10.316 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 70], 20.00th=[ 87], 00:30:10.316 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 111], 00:30:10.316 | 70.00th=[ 125], 80.00th=[ 134], 90.00th=[ 144], 95.00th=[ 157], 00:30:10.316 | 99.00th=[ 203], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:30:10.316 | 99.99th=[ 205] 00:30:10.316 bw ( KiB/s): min= 271, max= 896, per=4.04%, avg=585.15, stdev=137.35, samples=20 00:30:10.316 iops : min= 67, max= 224, avg=146.20, stdev=34.45, samples=20 00:30:10.316 lat (msec) : 50=5.81%, 100=36.38%, 250=57.81% 00:30:10.316 cpu : usr=33.88%, sys=2.23%, ctx=955, majf=0, minf=1075 00:30:10.316 IO depths : 1=0.1%, 2=4.4%, 4=17.5%, 8=64.4%, 16=13.6%, 32=0.0%, >=64=0.0% 00:30:10.316 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 complete : 0=0.0%, 4=92.1%, 8=4.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 issued rwts: total=1479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.316 filename2: (groupid=0, jobs=1): err= 0: pid=90672: Mon Jul 22 18:37:21 2024 00:30:10.316 read: IOPS=140, BW=563KiB/s (576kB/s)(5632KiB/10007msec) 00:30:10.316 slat (usec): min=5, max=4037, avg=34.63, stdev=234.65 00:30:10.316 clat (msec): min=39, max=213, avg=113.41, stdev=25.08 00:30:10.316 lat (msec): min=39, max=214, avg=113.45, stdev=25.07 00:30:10.316 clat percentiles (msec): 00:30:10.316 | 1.00th=[ 62], 5.00th=[ 85], 10.00th=[ 88], 20.00th=[ 95], 00:30:10.316 | 30.00th=[ 97], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 113], 00:30:10.316 | 70.00th=[ 125], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 159], 00:30:10.316 | 99.00th=[ 213], 99.50th=[ 213], 99.90th=[ 215], 99.95th=[ 215], 00:30:10.316 | 99.99th=[ 215] 00:30:10.316 bw ( KiB/s): min= 271, max= 752, per=3.85%, avg=557.79, stdev=109.52, samples=19 00:30:10.316 iops : min= 67, max= 188, avg=139.32, stdev=27.50, samples=19 00:30:10.316 lat (msec) : 50=0.14%, 100=36.01%, 250=63.85% 00:30:10.316 cpu : usr=44.33%, sys=2.87%, ctx=1186, majf=0, minf=1072 00:30:10.316 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:10.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 issued rwts: total=1408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.316 filename2: (groupid=0, jobs=1): err= 0: pid=90673: Mon Jul 22 18:37:21 2024 00:30:10.316 read: IOPS=151, BW=606KiB/s (621kB/s)(6112KiB/10081msec) 00:30:10.316 slat (usec): min=6, max=9055, avg=61.27, stdev=553.37 00:30:10.316 clat (msec): min=31, max=203, avg=104.92, stdev=31.57 00:30:10.316 lat (msec): min=31, max=203, avg=104.98, stdev=31.60 00:30:10.316 clat percentiles (msec): 00:30:10.316 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 70], 20.00th=[ 85], 00:30:10.316 | 30.00th=[ 95], 40.00th=[ 96], 50.00th=[ 106], 60.00th=[ 109], 00:30:10.316 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 155], 00:30:10.316 | 99.00th=[ 197], 99.50th=[ 201], 99.90th=[ 205], 99.95th=[ 205], 00:30:10.316 | 99.99th=[ 205] 00:30:10.316 bw ( KiB/s): min= 383, max= 896, per=4.19%, avg=606.90, stdev=131.86, samples=20 00:30:10.316 iops : min= 95, max= 224, avg=151.65, stdev=33.03, samples=20 00:30:10.316 lat (msec) : 50=6.54%, 100=42.41%, 250=51.05% 00:30:10.316 cpu : usr=30.96%, sys=2.20%, ctx=905, majf=0, minf=1072 00:30:10.316 IO depths : 1=0.1%, 2=2.8%, 4=11.2%, 8=71.1%, 16=14.9%, 32=0.0%, >=64=0.0% 00:30:10.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 complete : 0=0.0%, 4=90.5%, 8=7.0%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 issued rwts: total=1528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.316 filename2: (groupid=0, jobs=1): err= 0: pid=90674: Mon Jul 22 18:37:21 2024 00:30:10.316 read: IOPS=142, BW=568KiB/s (582kB/s)(5704KiB/10042msec) 00:30:10.316 slat (usec): min=4, max=8067, avg=48.22, stdev=425.46 00:30:10.316 clat (msec): min=33, max=206, avg=112.08, stdev=27.75 00:30:10.316 lat (msec): min=33, max=206, avg=112.13, stdev=27.75 00:30:10.316 clat 
percentiles (msec): 00:30:10.316 | 1.00th=[ 39], 5.00th=[ 72], 10.00th=[ 86], 20.00th=[ 95], 00:30:10.316 | 30.00th=[ 96], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 116], 00:30:10.316 | 70.00th=[ 124], 80.00th=[ 132], 90.00th=[ 146], 95.00th=[ 157], 00:30:10.316 | 99.00th=[ 203], 99.50th=[ 205], 99.90th=[ 207], 99.95th=[ 207], 00:30:10.316 | 99.99th=[ 207] 00:30:10.316 bw ( KiB/s): min= 368, max= 752, per=3.85%, avg=558.26, stdev=102.06, samples=19 00:30:10.316 iops : min= 92, max= 188, avg=139.42, stdev=25.53, samples=19 00:30:10.316 lat (msec) : 50=2.52%, 100=34.85%, 250=62.62% 00:30:10.316 cpu : usr=35.59%, sys=2.26%, ctx=1031, majf=0, minf=1074 00:30:10.316 IO depths : 1=0.1%, 2=5.8%, 4=23.2%, 8=58.2%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:10.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 complete : 0=0.0%, 4=93.8%, 8=1.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.316 issued rwts: total=1426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.316 filename2: (groupid=0, jobs=1): err= 0: pid=90675: Mon Jul 22 18:37:21 2024 00:30:10.316 read: IOPS=138, BW=555KiB/s (568kB/s)(5568KiB/10031msec) 00:30:10.316 slat (usec): min=8, max=8066, avg=40.16, stdev=356.97 00:30:10.316 clat (msec): min=47, max=203, avg=114.81, stdev=25.82 00:30:10.316 lat (msec): min=47, max=203, avg=114.85, stdev=25.82 00:30:10.316 clat percentiles (msec): 00:30:10.316 | 1.00th=[ 48], 5.00th=[ 85], 10.00th=[ 88], 20.00th=[ 96], 00:30:10.316 | 30.00th=[ 97], 40.00th=[ 103], 50.00th=[ 108], 60.00th=[ 117], 00:30:10.316 | 70.00th=[ 127], 80.00th=[ 136], 90.00th=[ 146], 95.00th=[ 157], 00:30:10.316 | 99.00th=[ 205], 99.50th=[ 205], 99.90th=[ 205], 99.95th=[ 205], 00:30:10.316 | 99.99th=[ 205] 00:30:10.316 bw ( KiB/s): min= 368, max= 640, per=3.81%, avg=552.00, stdev=90.46, samples=19 00:30:10.316 iops : min= 92, max= 160, avg=137.84, stdev=22.61, samples=19 00:30:10.316 lat (msec) : 50=1.01%, 100=34.20%, 250=64.80% 00:30:10.317 cpu : usr=37.69%, sys=2.32%, ctx=1021, majf=0, minf=1073 00:30:10.317 IO depths : 1=0.1%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:10.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.317 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.317 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.317 filename2: (groupid=0, jobs=1): err= 0: pid=90676: Mon Jul 22 18:37:21 2024 00:30:10.317 read: IOPS=141, BW=566KiB/s (580kB/s)(5696KiB/10061msec) 00:30:10.317 slat (usec): min=6, max=8054, avg=30.00, stdev=238.25 00:30:10.317 clat (msec): min=32, max=204, avg=112.56, stdev=28.70 00:30:10.317 lat (msec): min=32, max=205, avg=112.59, stdev=28.71 00:30:10.317 clat percentiles (msec): 00:30:10.317 | 1.00th=[ 34], 5.00th=[ 82], 10.00th=[ 88], 20.00th=[ 94], 00:30:10.317 | 30.00th=[ 97], 40.00th=[ 102], 50.00th=[ 110], 60.00th=[ 116], 00:30:10.317 | 70.00th=[ 121], 80.00th=[ 134], 90.00th=[ 144], 95.00th=[ 157], 00:30:10.317 | 99.00th=[ 201], 99.50th=[ 201], 99.90th=[ 205], 99.95th=[ 205], 00:30:10.317 | 99.99th=[ 205] 00:30:10.317 bw ( KiB/s): min= 368, max= 881, per=3.88%, avg=561.90, stdev=116.26, samples=20 00:30:10.317 iops : min= 92, max= 220, avg=140.25, stdev=29.14, samples=20 00:30:10.317 lat (msec) : 50=3.23%, 100=33.43%, 250=63.34% 00:30:10.317 cpu : usr=38.94%, sys=2.57%, ctx=1241, majf=0, minf=1073 00:30:10.317 IO depths 
: 1=0.1%, 2=6.2%, 4=24.7%, 8=56.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:10.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.317 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.317 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.317 filename2: (groupid=0, jobs=1): err= 0: pid=90677: Mon Jul 22 18:37:21 2024 00:30:10.317 read: IOPS=160, BW=641KiB/s (657kB/s)(6428KiB/10023msec) 00:30:10.317 slat (usec): min=6, max=8076, avg=32.80, stdev=283.65 00:30:10.317 clat (msec): min=25, max=206, avg=99.52, stdev=30.64 00:30:10.317 lat (msec): min=25, max=206, avg=99.56, stdev=30.64 00:30:10.317 clat percentiles (msec): 00:30:10.317 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 73], 00:30:10.317 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 107], 00:30:10.317 | 70.00th=[ 110], 80.00th=[ 124], 90.00th=[ 136], 95.00th=[ 144], 00:30:10.317 | 99.00th=[ 190], 99.50th=[ 205], 99.90th=[ 207], 99.95th=[ 207], 00:30:10.317 | 99.99th=[ 207] 00:30:10.317 bw ( KiB/s): min= 408, max= 822, per=4.38%, avg=634.42, stdev=124.01, samples=19 00:30:10.317 iops : min= 102, max= 205, avg=158.47, stdev=31.03, samples=19 00:30:10.317 lat (msec) : 50=6.85%, 100=47.98%, 250=45.18% 00:30:10.317 cpu : usr=30.84%, sys=2.27%, ctx=905, majf=0, minf=1072 00:30:10.317 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:30:10.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.317 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.317 issued rwts: total=1607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.317 filename2: (groupid=0, jobs=1): err= 0: pid=90678: Mon Jul 22 18:37:21 2024 00:30:10.317 read: IOPS=170, BW=681KiB/s (697kB/s)(6848KiB/10063msec) 00:30:10.317 slat (usec): min=6, max=12048, avg=35.24, stdev=330.37 00:30:10.317 clat (msec): min=2, max=215, avg=93.40, stdev=50.48 00:30:10.317 lat (msec): min=2, max=215, avg=93.44, stdev=50.49 00:30:10.317 clat percentiles (msec): 00:30:10.317 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 43], 00:30:10.317 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 100], 60.00th=[ 110], 00:30:10.317 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 142], 95.00th=[ 165], 00:30:10.317 | 99.00th=[ 205], 99.50th=[ 215], 99.90th=[ 215], 99.95th=[ 215], 00:30:10.317 | 99.99th=[ 215] 00:30:10.317 bw ( KiB/s): min= 272, max= 3312, per=4.68%, avg=678.20, stdev=628.04, samples=20 00:30:10.317 iops : min= 68, max= 828, avg=169.50, stdev=157.02, samples=20 00:30:10.317 lat (msec) : 4=11.21%, 10=5.49%, 20=1.99%, 50=3.50%, 100=29.73% 00:30:10.317 lat (msec) : 250=48.07% 00:30:10.317 cpu : usr=40.34%, sys=2.21%, ctx=1171, majf=0, minf=1062 00:30:10.317 IO depths : 1=1.1%, 2=7.3%, 4=25.0%, 8=55.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:30:10.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.317 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.317 issued rwts: total=1712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:10.317 00:30:10.317 Run status group 0 (all jobs): 00:30:10.317 READ: bw=14.1MiB/s (14.8MB/s), 555KiB/s-681KiB/s (568kB/s-697kB/s), io=143MiB (150MB), run=10003-10097msec 00:30:10.905 ----------------------------------------------------- 
00:30:10.905 Suppressions used: 00:30:10.905 count bytes template 00:30:10.905 45 402 /usr/src/fio/parse.c 00:30:10.905 1 8 libtcmalloc_minimal.so 00:30:10.905 1 904 libcrypto.so 00:30:10.905 ----------------------------------------------------- 00:30:10.905 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:10.905 18:37:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 bdev_null0 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 [2024-07-22 18:37:22.752425] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:10.905 
18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 bdev_null1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:10.905 18:37:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:10.905 { 00:30:10.905 "params": { 00:30:10.905 "name": "Nvme$subsystem", 00:30:10.905 "trtype": "$TEST_TRANSPORT", 00:30:10.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.906 "adrfam": "ipv4", 00:30:10.906 "trsvcid": "$NVMF_PORT", 00:30:10.906 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.906 "hdgst": ${hdgst:-false}, 00:30:10.906 "ddgst": ${ddgst:-false} 00:30:10.906 }, 00:30:10.906 "method": "bdev_nvme_attach_controller" 00:30:10.906 } 00:30:10.906 EOF 00:30:10.906 )") 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:10.906 { 00:30:10.906 "params": { 00:30:10.906 "name": "Nvme$subsystem", 00:30:10.906 "trtype": "$TEST_TRANSPORT", 00:30:10.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.906 "adrfam": "ipv4", 00:30:10.906 "trsvcid": "$NVMF_PORT", 00:30:10.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.906 "hdgst": ${hdgst:-false}, 00:30:10.906 "ddgst": ${ddgst:-false} 00:30:10.906 }, 00:30:10.906 "method": "bdev_nvme_attach_controller" 00:30:10.906 } 00:30:10.906 EOF 00:30:10.906 )") 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:10.906 "params": { 00:30:10.906 "name": "Nvme0", 00:30:10.906 "trtype": "tcp", 00:30:10.906 "traddr": "10.0.0.2", 00:30:10.906 "adrfam": "ipv4", 00:30:10.906 "trsvcid": "4420", 00:30:10.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:10.906 "hdgst": false, 00:30:10.906 "ddgst": false 00:30:10.906 }, 00:30:10.906 "method": "bdev_nvme_attach_controller" 00:30:10.906 },{ 00:30:10.906 "params": { 00:30:10.906 "name": "Nvme1", 00:30:10.906 "trtype": "tcp", 00:30:10.906 "traddr": "10.0.0.2", 00:30:10.906 "adrfam": "ipv4", 00:30:10.906 "trsvcid": "4420", 00:30:10.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:10.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:10.906 "hdgst": false, 00:30:10.906 "ddgst": false 00:30:10.906 }, 00:30:10.906 "method": "bdev_nvme_attach_controller" 00:30:10.906 }' 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:10.906 18:37:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.163 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:11.163 ... 00:30:11.163 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:11.163 ... 
00:30:11.163 fio-3.35 00:30:11.163 Starting 4 threads 00:30:17.719 00:30:17.719 filename0: (groupid=0, jobs=1): err= 0: pid=90814: Mon Jul 22 18:37:28 2024 00:30:17.719 read: IOPS=1487, BW=11.6MiB/s (12.2MB/s)(58.1MiB/5001msec) 00:30:17.719 slat (nsec): min=7731, max=69191, avg=19315.09, stdev=5341.59 00:30:17.719 clat (usec): min=1558, max=12144, avg=5304.69, stdev=728.59 00:30:17.719 lat (usec): min=1573, max=12203, avg=5324.00, stdev=728.69 00:30:17.719 clat percentiles (usec): 00:30:17.719 | 1.00th=[ 2540], 5.00th=[ 4228], 10.00th=[ 4817], 20.00th=[ 5145], 00:30:17.719 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5276], 60.00th=[ 5276], 00:30:17.719 | 70.00th=[ 5342], 80.00th=[ 5604], 90.00th=[ 5800], 95.00th=[ 6390], 00:30:17.719 | 99.00th=[ 8094], 99.50th=[ 8356], 99.90th=[10290], 99.95th=[11469], 00:30:17.719 | 99.99th=[12125] 00:30:17.719 bw ( KiB/s): min=10992, max=12960, per=23.22%, avg=11877.33, stdev=530.54, samples=9 00:30:17.719 iops : min= 1374, max= 1620, avg=1484.67, stdev=66.32, samples=9 00:30:17.719 lat (msec) : 2=0.23%, 4=1.76%, 10=97.84%, 20=0.17% 00:30:17.719 cpu : usr=92.30%, sys=6.78%, ctx=11, majf=0, minf=1075 00:30:17.719 IO depths : 1=0.1%, 2=20.7%, 4=53.3%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:17.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.719 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.719 issued rwts: total=7441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.719 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:17.719 filename0: (groupid=0, jobs=1): err= 0: pid=90815: Mon Jul 22 18:37:28 2024 00:30:17.719 read: IOPS=1838, BW=14.4MiB/s (15.1MB/s)(71.8MiB/5001msec) 00:30:17.719 slat (nsec): min=7620, max=64701, avg=14776.16, stdev=5123.34 00:30:17.719 clat (usec): min=975, max=13336, avg=4310.72, stdev=1462.41 00:30:17.719 lat (usec): min=986, max=13368, avg=4325.50, stdev=1462.75 00:30:17.719 clat percentiles (usec): 00:30:17.719 | 1.00th=[ 1729], 5.00th=[ 1778], 10.00th=[ 1811], 20.00th=[ 2737], 00:30:17.719 | 30.00th=[ 3687], 40.00th=[ 4228], 50.00th=[ 4817], 60.00th=[ 5014], 00:30:17.719 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5473], 95.00th=[ 6063], 00:30:17.719 | 99.00th=[ 8225], 99.50th=[ 9241], 99.90th=[10814], 99.95th=[11207], 00:30:17.719 | 99.99th=[13304] 00:30:17.719 bw ( KiB/s): min=10384, max=16592, per=29.10%, avg=14887.11, stdev=2070.52, samples=9 00:30:17.719 iops : min= 1298, max= 2074, avg=1860.89, stdev=258.81, samples=9 00:30:17.719 lat (usec) : 1000=0.03% 00:30:17.719 lat (msec) : 2=13.88%, 4=20.38%, 10=65.54%, 20=0.17% 00:30:17.719 cpu : usr=91.94%, sys=7.00%, ctx=58, majf=0, minf=1074 00:30:17.719 IO depths : 1=0.1%, 2=3.9%, 4=62.2%, 8=33.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:17.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.719 complete : 0=0.0%, 4=98.5%, 8=1.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.719 issued rwts: total=9195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.719 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:17.719 filename1: (groupid=0, jobs=1): err= 0: pid=90816: Mon Jul 22 18:37:28 2024 00:30:17.719 read: IOPS=1583, BW=12.4MiB/s (13.0MB/s)(61.9MiB/5003msec) 00:30:17.719 slat (nsec): min=5085, max=75368, avg=18437.04, stdev=5587.30 00:30:17.719 clat (usec): min=1772, max=12213, avg=4986.59, stdev=1045.15 00:30:17.719 lat (usec): min=1788, max=12230, avg=5005.02, stdev=1045.63 00:30:17.719 clat percentiles (usec): 00:30:17.719 | 1.00th=[ 2606], 5.00th=[ 
2737], 10.00th=[ 3064], 20.00th=[ 4752], 00:30:17.719 | 30.00th=[ 5080], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5276], 00:30:17.719 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5735], 95.00th=[ 6194], 00:30:17.719 | 99.00th=[ 8160], 99.50th=[ 8455], 99.90th=[10552], 99.95th=[11469], 00:30:17.719 | 99.99th=[12256] 00:30:17.719 bw ( KiB/s): min=10928, max=15424, per=24.76%, avg=12665.60, stdev=1412.12, samples=10 00:30:17.719 iops : min= 1366, max= 1928, avg=1583.20, stdev=176.51, samples=10 00:30:17.720 lat (msec) : 2=0.03%, 4=13.76%, 10=86.03%, 20=0.19% 00:30:17.720 cpu : usr=91.70%, sys=7.18%, ctx=876, majf=0, minf=1074 00:30:17.720 IO depths : 1=0.1%, 2=15.4%, 4=56.2%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:17.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.720 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.720 issued rwts: total=7924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.720 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:17.720 filename1: (groupid=0, jobs=1): err= 0: pid=90817: Mon Jul 22 18:37:28 2024 00:30:17.720 read: IOPS=1485, BW=11.6MiB/s (12.2MB/s)(58.1MiB/5002msec) 00:30:17.720 slat (nsec): min=5540, max=57500, avg=19415.19, stdev=5367.46 00:30:17.720 clat (usec): min=1382, max=12203, avg=5309.96, stdev=709.69 00:30:17.720 lat (usec): min=1401, max=12216, avg=5329.37, stdev=709.54 00:30:17.720 clat percentiles (usec): 00:30:17.720 | 1.00th=[ 2606], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5145], 00:30:17.720 | 30.00th=[ 5145], 40.00th=[ 5211], 50.00th=[ 5276], 60.00th=[ 5276], 00:30:17.720 | 70.00th=[ 5342], 80.00th=[ 5604], 90.00th=[ 5800], 95.00th=[ 6390], 00:30:17.720 | 99.00th=[ 8094], 99.50th=[ 8356], 99.90th=[10552], 99.95th=[11469], 00:30:17.720 | 99.99th=[12256] 00:30:17.720 bw ( KiB/s): min=10949, max=12976, per=23.21%, avg=11874.33, stdev=547.42, samples=9 00:30:17.720 iops : min= 1368, max= 1622, avg=1484.22, stdev=68.56, samples=9 00:30:17.720 lat (msec) : 2=0.08%, 4=1.67%, 10=98.05%, 20=0.20% 00:30:17.720 cpu : usr=91.76%, sys=7.18%, ctx=8, majf=0, minf=1073 00:30:17.720 IO depths : 1=0.1%, 2=20.8%, 4=53.2%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:17.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.720 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.720 issued rwts: total=7431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.720 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:17.720 00:30:17.720 Run status group 0 (all jobs): 00:30:17.720 READ: bw=50.0MiB/s (52.4MB/s), 11.6MiB/s-14.4MiB/s (12.2MB/s-15.1MB/s), io=250MiB (262MB), run=5001-5003msec 00:30:18.288 ----------------------------------------------------- 00:30:18.288 Suppressions used: 00:30:18.288 count bytes template 00:30:18.288 6 52 /usr/src/fio/parse.c 00:30:18.288 1 8 libtcmalloc_minimal.so 00:30:18.288 1 904 libcrypto.so 00:30:18.288 ----------------------------------------------------- 00:30:18.288 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.288 00:30:18.288 real 0m28.108s 00:30:18.288 user 2m6.255s 00:30:18.288 sys 0m9.622s 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:18.288 18:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:18.288 ************************************ 00:30:18.288 END TEST fio_dif_rand_params 00:30:18.288 ************************************ 00:30:18.547 18:37:30 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:18.547 18:37:30 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:18.547 18:37:30 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:18.547 18:37:30 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:18.547 18:37:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:18.547 ************************************ 00:30:18.547 START TEST fio_dif_digest 00:30:18.547 ************************************ 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # 
numjobs=3 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:18.547 bdev_null0 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:18.547 [2024-07-22 18:37:30.383339] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:18.547 { 00:30:18.547 "params": { 00:30:18.547 "name": "Nvme$subsystem", 00:30:18.547 "trtype": "$TEST_TRANSPORT", 00:30:18.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:18.547 "adrfam": "ipv4", 
00:30:18.547 "trsvcid": "$NVMF_PORT", 00:30:18.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:18.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:18.547 "hdgst": ${hdgst:-false}, 00:30:18.547 "ddgst": ${ddgst:-false} 00:30:18.547 }, 00:30:18.547 "method": "bdev_nvme_attach_controller" 00:30:18.547 } 00:30:18.547 EOF 00:30:18.547 )") 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:18.547 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:18.548 "params": { 00:30:18.548 "name": "Nvme0", 00:30:18.548 "trtype": "tcp", 00:30:18.548 "traddr": "10.0.0.2", 00:30:18.548 "adrfam": "ipv4", 00:30:18.548 "trsvcid": "4420", 00:30:18.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:18.548 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:18.548 "hdgst": true, 00:30:18.548 "ddgst": true 00:30:18.548 }, 00:30:18.548 "method": "bdev_nvme_attach_controller" 00:30:18.548 }' 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:18.548 18:37:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:18.806 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:18.806 ... 00:30:18.806 fio-3.35 00:30:18.806 Starting 3 threads 00:30:31.016 00:30:31.016 filename0: (groupid=0, jobs=1): err= 0: pid=90929: Mon Jul 22 18:37:41 2024 00:30:31.016 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(236MiB/10010msec) 00:30:31.016 slat (usec): min=10, max=128, avg=27.11, stdev=14.27 00:30:31.016 clat (usec): min=11771, max=18421, avg=15870.24, stdev=316.81 00:30:31.016 lat (usec): min=11785, max=18458, avg=15897.35, stdev=319.27 00:30:31.016 clat percentiles (usec): 00:30:31.016 | 1.00th=[15533], 5.00th=[15533], 10.00th=[15533], 20.00th=[15664], 00:30:31.016 | 30.00th=[15795], 40.00th=[15795], 50.00th=[15926], 60.00th=[15926], 00:30:31.016 | 70.00th=[15926], 80.00th=[16057], 90.00th=[16188], 95.00th=[16188], 00:30:31.016 | 99.00th=[16712], 99.50th=[17433], 99.90th=[18482], 99.95th=[18482], 00:30:31.016 | 99.99th=[18482] 00:30:31.016 bw ( KiB/s): min=23808, max=25344, per=33.41%, avg=24131.37, stdev=466.16, samples=19 00:30:31.016 iops : min= 186, max= 198, avg=188.53, stdev= 3.64, samples=19 00:30:31.016 lat (msec) : 20=100.00% 00:30:31.016 cpu : usr=92.34%, sys=7.05%, ctx=17, majf=0, minf=1072 00:30:31.016 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:31.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.016 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.016 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:31.016 filename0: (groupid=0, jobs=1): err= 0: pid=90930: Mon Jul 22 18:37:41 2024 00:30:31.016 read: IOPS=188, BW=23.5MiB/s (24.6MB/s)(235MiB/10004msec) 00:30:31.016 slat (nsec): min=4997, max=73654, avg=26290.72, stdev=13059.23 00:30:31.016 clat (usec): min=15321, max=24138, avg=15888.86, stdev=425.34 00:30:31.016 lat (usec): min=15337, max=24184, avg=15915.15, stdev=427.94 00:30:31.016 clat percentiles (usec): 00:30:31.016 | 1.00th=[15401], 5.00th=[15533], 10.00th=[15533], 20.00th=[15664], 00:30:31.016 | 
30.00th=[15795], 40.00th=[15795], 50.00th=[15926], 60.00th=[15926], 00:30:31.016 | 70.00th=[15926], 80.00th=[16057], 90.00th=[16188], 95.00th=[16319], 00:30:31.016 | 99.00th=[16712], 99.50th=[17695], 99.90th=[24249], 99.95th=[24249], 00:30:31.016 | 99.99th=[24249] 00:30:31.016 bw ( KiB/s): min=23040, max=24576, per=33.36%, avg=24090.95, stdev=458.70, samples=19 00:30:31.016 iops : min= 180, max= 192, avg=188.21, stdev= 3.58, samples=19 00:30:31.016 lat (msec) : 20=99.84%, 50=0.16% 00:30:31.016 cpu : usr=92.97%, sys=6.39%, ctx=14, majf=0, minf=1074 00:30:31.016 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:31.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.016 issued rwts: total=1881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.016 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:31.016 filename0: (groupid=0, jobs=1): err= 0: pid=90931: Mon Jul 22 18:37:41 2024 00:30:31.016 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(236MiB/10012msec) 00:30:31.016 slat (nsec): min=9322, max=73855, avg=27410.56, stdev=12729.78 00:30:31.016 clat (usec): min=11727, max=20180, avg=15874.43, stdev=356.34 00:30:31.016 lat (usec): min=11737, max=20241, avg=15901.85, stdev=359.50 00:30:31.016 clat percentiles (usec): 00:30:31.016 | 1.00th=[15401], 5.00th=[15533], 10.00th=[15533], 20.00th=[15664], 00:30:31.016 | 30.00th=[15795], 40.00th=[15795], 50.00th=[15926], 60.00th=[15926], 00:30:31.016 | 70.00th=[15926], 80.00th=[16057], 90.00th=[16188], 95.00th=[16319], 00:30:31.016 | 99.00th=[16712], 99.50th=[17695], 99.90th=[20055], 99.95th=[20055], 00:30:31.016 | 99.99th=[20055] 00:30:31.016 bw ( KiB/s): min=23040, max=24576, per=33.36%, avg=24090.95, stdev=458.70, samples=19 00:30:31.016 iops : min= 180, max= 192, avg=188.21, stdev= 3.58, samples=19 00:30:31.016 lat (msec) : 20=99.84%, 50=0.16% 00:30:31.016 cpu : usr=92.68%, sys=6.71%, ctx=14, majf=0, minf=1074 00:30:31.016 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:31.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.016 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.016 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:31.016 00:30:31.016 Run status group 0 (all jobs): 00:30:31.016 READ: bw=70.5MiB/s (74.0MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.7MB/s), io=706MiB (740MB), run=10004-10012msec 00:30:31.016 ----------------------------------------------------- 00:30:31.016 Suppressions used: 00:30:31.016 count bytes template 00:30:31.016 5 44 /usr/src/fio/parse.c 00:30:31.016 1 8 libtcmalloc_minimal.so 00:30:31.016 1 904 libcrypto.so 00:30:31.016 ----------------------------------------------------- 00:30:31.016 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.016 18:37:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:31.017 18:37:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.017 00:30:31.017 real 0m12.541s 00:30:31.017 user 0m29.891s 00:30:31.017 sys 0m2.452s 00:30:31.017 18:37:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:31.017 18:37:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:31.017 ************************************ 00:30:31.017 END TEST fio_dif_digest 00:30:31.017 ************************************ 00:30:31.017 18:37:42 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:31.017 18:37:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:31.017 18:37:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:31.017 18:37:42 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:31.017 18:37:42 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:31.017 18:37:42 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:31.017 18:37:42 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:31.017 18:37:42 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:31.017 18:37:42 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:31.017 rmmod nvme_tcp 00:30:31.017 rmmod nvme_fabrics 00:30:31.017 rmmod nvme_keyring 00:30:31.017 18:37:43 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:31.275 18:37:43 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:31.275 18:37:43 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:31.275 18:37:43 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 90174 ']' 00:30:31.275 18:37:43 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 90174 00:30:31.275 18:37:43 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 90174 ']' 00:30:31.275 18:37:43 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 90174 00:30:31.275 18:37:43 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:30:31.275 18:37:43 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:31.275 18:37:43 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90174 00:30:31.275 18:37:43 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:31.275 18:37:43 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:31.275 killing process with pid 90174 00:30:31.275 18:37:43 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90174' 00:30:31.275 18:37:43 nvmf_dif -- common/autotest_common.sh@967 -- # kill 90174 00:30:31.275 18:37:43 nvmf_dif -- common/autotest_common.sh@972 -- # wait 90174 00:30:32.664 18:37:44 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:32.664 18:37:44 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:32.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:32.664 Waiting for block devices as requested 00:30:32.664 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:32.922 
0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:32.922 18:37:44 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:32.922 18:37:44 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:32.922 18:37:44 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:32.922 18:37:44 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:32.922 18:37:44 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.922 18:37:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:32.922 18:37:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.922 18:37:44 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:32.922 00:30:32.922 real 1m9.890s 00:30:32.922 user 4m6.052s 00:30:32.922 sys 0m20.245s 00:30:32.922 18:37:44 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:32.922 18:37:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:32.922 ************************************ 00:30:32.922 END TEST nvmf_dif 00:30:32.922 ************************************ 00:30:32.922 18:37:44 -- common/autotest_common.sh@1142 -- # return 0 00:30:32.922 18:37:44 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:32.922 18:37:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:32.922 18:37:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.922 18:37:44 -- common/autotest_common.sh@10 -- # set +x 00:30:32.922 ************************************ 00:30:32.922 START TEST nvmf_abort_qd_sizes 00:30:32.922 ************************************ 00:30:32.922 18:37:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:33.181 * Looking for test storage... 
00:30:33.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:33.181 18:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:33.181 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.181 18:37:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:33.181 18:37:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.181 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:33.181 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:33.181 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:33.181 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:33.181 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:33.181 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:33.181 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:33.182 18:37:45 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:33.182 Cannot find device "nvmf_tgt_br" 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:33.182 Cannot find device "nvmf_tgt_br2" 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:33.182 Cannot find device "nvmf_tgt_br" 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:33.182 Cannot find device "nvmf_tgt_br2" 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:33.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:33.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:33.182 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:33.440 18:37:45 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:33.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:30:33.440 00:30:33.440 --- 10.0.0.2 ping statistics --- 00:30:33.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.440 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:33.440 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:33.440 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:30:33.440 00:30:33.440 --- 10.0.0.3 ping statistics --- 00:30:33.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.440 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:33.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:33.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:30:33.440 00:30:33.440 --- 10.0.0.1 ping statistics --- 00:30:33.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.440 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:33.440 18:37:45 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:34.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:34.267 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:34.267 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=91532 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 91532 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 91532 ']' 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:34.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:34.267 18:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:34.525 [2024-07-22 18:37:46.300445] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
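At this point the abort_qd_sizes test has its veth/bridge topology in place and launches the target inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of what nvmfappstart/waitforlisten amount to here: the ip-netns command line and the /var/tmp/spdk.sock address are taken from the trace above, while the readiness loop is an approximation of waitforlisten (polling the RPC socket with rpc_get_methods), not a verbatim copy of autotest_common.sh.

ip netns exec nvmf_tgt_ns_spdk \
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# Wait until the target answers on its RPC socket before issuing any RPCs.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
  -t 1 rpc_get_methods &> /dev/null; do
  sleep 0.5
done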
00:30:34.525 [2024-07-22 18:37:46.300638] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.525 [2024-07-22 18:37:46.484952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:35.091 [2024-07-22 18:37:46.814047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.091 [2024-07-22 18:37:46.814123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.091 [2024-07-22 18:37:46.814145] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.091 [2024-07-22 18:37:46.814164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.091 [2024-07-22 18:37:46.814183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.091 [2024-07-22 18:37:46.814379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.091 [2024-07-22 18:37:46.814790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.091 [2024-07-22 18:37:46.815454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.091 [2024-07-22 18:37:46.815473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.091 [2024-07-22 18:37:47.048771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:30:35.350 18:37:47 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
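The nvme_in_userspace walk traced above reduces to one lspci pipeline plus a driver-binding check. A standalone sketch, assuming the same class/subclass/prog-if filter of 01/08/02 and empty allow/block lists as in the trace:

#!/usr/bin/env bash
# List NVMe controllers (PCI class 01, subclass 08, prog-if 02) still bound to
# the kernel nvme driver, mirroring iter_pci_class_code / nvme_in_userspace.
set -euo pipefail

bdfs=()
while read -r bdf; do
    # Keep a BDF only if the kernel nvme driver currently owns it.
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")
done < <(lspci -mm -n -D | grep -i -- -p02 \
             | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')

if (( ${#bdfs[@]} > 0 )); then
    printf '%s\n' "${bdfs[@]}"     # here: 0000:00:10.0 and 0000:00:11.0
fi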
00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:35.350 18:37:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:35.350 ************************************ 00:30:35.350 START TEST spdk_target_abort 00:30:35.350 ************************************ 00:30:35.350 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:30:35.350 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:35.350 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:35.350 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.350 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:35.609 spdk_targetn1 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:35.609 [2024-07-22 18:37:47.372239] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:35.609 [2024-07-22 18:37:47.410075] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.609 18:37:47 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:35.609 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:35.610 18:37:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:38.891 Initializing NVMe Controllers 00:30:38.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:38.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:38.891 Initialization complete. Launching workers. 
00:30:38.891 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8814, failed: 0 00:30:38.891 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1014, failed to submit 7800 00:30:38.891 success 792, unsuccess 222, failed 0 00:30:38.891 18:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:38.891 18:37:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:43.087 Initializing NVMe Controllers 00:30:43.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:43.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:43.087 Initialization complete. Launching workers. 00:30:43.087 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8828, failed: 0 00:30:43.087 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1169, failed to submit 7659 00:30:43.087 success 358, unsuccess 811, failed 0 00:30:43.087 18:37:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:43.087 18:37:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:45.620 Initializing NVMe Controllers 00:30:45.620 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:45.620 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:45.620 Initialization complete. Launching workers. 
00:30:45.620 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27020, failed: 0 00:30:45.620 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2240, failed to submit 24780 00:30:45.620 success 271, unsuccess 1969, failed 0 00:30:45.620 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:45.620 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.620 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.620 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.620 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:45.620 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.620 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 91532 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 91532 ']' 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 91532 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91532 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:46.189 killing process with pid 91532 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91532' 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 91532 00:30:46.189 18:37:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 91532 00:30:47.127 00:30:47.127 real 0m11.838s 00:30:47.127 user 0m45.527s 00:30:47.127 sys 0m2.281s 00:30:47.127 18:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:47.127 18:37:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:47.127 ************************************ 00:30:47.127 END TEST spdk_target_abort 00:30:47.127 ************************************ 00:30:47.386 18:37:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:47.386 18:37:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:47.386 18:37:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:47.386 18:37:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.386 18:37:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:47.386 
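Reading the three runs above: for each queue depth, "abort submitted" plus "failed to submit" equals the completed I/O count (1014 + 7800 = 8814 at qd 4), and "success" plus "unsuccess" equals the submitted aborts (792 + 222 = 1014), where unsuccess counts abort commands that completed without managing to abort their I/O. The whole spdk_target_abort sequence condenses to the RPC calls and abort invocations below; this is a sketch assuming nvmf_tgt is already running (inside nvmf_tgt_ns_spdk, as above) and reachable through the default /var/tmp/spdk.sock RPC socket.

#!/usr/bin/env bash
# Condensed spdk_target_abort flow: export a local PCIe NVMe namespace over
# NVMe/TCP, then run the abort example against it at several queue depths.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
abort=/home/vagrant/spdk_repo/spdk/build/examples/abort
nqn=nqn.2016-06.io.spdk:testnqn

"$rpc" bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target   # creates spdk_targetn1
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_ns "$nqn" spdk_targetn1
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

for qd in 4 24 64; do
    "$abort" -q "$qd" -w rw -M 50 -o 4096 \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$nqn"
done

"$rpc" nvmf_delete_subsystem "$nqn"
"$rpc" bdev_nvme_detach_controller spdk_target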
************************************ 00:30:47.386 START TEST kernel_target_abort 00:30:47.386 ************************************ 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:47.386 18:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:47.645 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:47.645 Waiting for block devices as requested 00:30:47.645 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:47.903 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:48.162 No valid GPT data, bailing 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:48.162 No valid GPT data, bailing 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:48.162 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:48.421 No valid GPT data, bailing 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:48.421 No valid GPT data, bailing 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:48.421 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 --hostid=1e224894-a0fc-4112-b81b-a37606f50c96 -a 10.0.0.1 -t tcp -s 4420 00:30:48.422 00:30:48.422 Discovery Log Number of Records 2, Generation counter 2 00:30:48.422 =====Discovery Log Entry 0====== 00:30:48.422 trtype: tcp 00:30:48.422 adrfam: ipv4 00:30:48.422 subtype: current discovery subsystem 00:30:48.422 treq: not specified, sq flow control disable supported 00:30:48.422 portid: 1 00:30:48.422 trsvcid: 4420 00:30:48.422 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:48.422 traddr: 10.0.0.1 00:30:48.422 eflags: none 00:30:48.422 sectype: none 00:30:48.422 =====Discovery Log Entry 1====== 00:30:48.422 trtype: tcp 00:30:48.422 adrfam: ipv4 00:30:48.422 subtype: nvme subsystem 00:30:48.422 treq: not specified, sq flow control disable supported 00:30:48.422 portid: 1 00:30:48.422 trsvcid: 4420 00:30:48.422 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:48.422 traddr: 10.0.0.1 00:30:48.422 eflags: none 00:30:48.422 sectype: none 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:48.422 18:38:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:48.422 18:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:51.890 Initializing NVMe Controllers 00:30:51.890 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:51.890 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:51.890 Initialization complete. Launching workers. 00:30:51.890 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24868, failed: 0 00:30:51.890 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24868, failed to submit 0 00:30:51.890 success 0, unsuccess 24868, failed 0 00:30:51.890 18:38:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:51.890 18:38:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:55.181 Initializing NVMe Controllers 00:30:55.181 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:55.181 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:55.181 Initialization complete. Launching workers. 
00:30:55.181 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53767, failed: 0 00:30:55.181 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22820, failed to submit 30947 00:30:55.181 success 0, unsuccess 22820, failed 0 00:30:55.181 18:38:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:55.181 18:38:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:58.464 Initializing NVMe Controllers 00:30:58.464 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:58.464 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:58.464 Initialization complete. Launching workers. 00:30:58.464 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61574, failed: 0 00:30:58.464 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15378, failed to submit 46196 00:30:58.464 success 0, unsuccess 15378, failed 0 00:30:58.464 18:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:58.464 18:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:58.464 18:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:58.464 18:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:58.464 18:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:58.464 18:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:58.464 18:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:58.464 18:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:58.464 18:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:58.464 18:38:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:59.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:59.605 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:59.863 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:59.863 00:30:59.863 real 0m12.553s 00:30:59.863 user 0m6.773s 00:30:59.863 sys 0m3.503s 00:30:59.863 18:38:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:59.863 18:38:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:59.863 ************************************ 00:30:59.863 END TEST kernel_target_abort 00:30:59.863 ************************************ 00:30:59.863 18:38:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:59.863 18:38:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:59.864 
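The kernel_target_abort counters read the same way (22820 + 30947 = 53767 at qd 24), the difference being that success is 0 in all three runs: every abort completes without aborting anything against the Linux nvmet target. Below is a compact sketch of the configfs setup and teardown that configure_kernel_target/clean_kernel_target perform above. Note that xtrace hides redirection targets, so the attribute file names here are filled in from the standard nvmet configfs layout rather than read from this log.

#!/usr/bin/env bash
# Kernel (nvmet) NVMe/TCP target backed by /dev/nvme1n1, as in the trace above.
# Run as root with configfs mounted.
set -euo pipefail

nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
dev=/dev/nvme1n1     # picked by the block-device scan above (no GPT, not in use)

modprobe nvmet       # nvmet_tcp is normally pulled in once the tcp port is wired up
mkdir "$sub" "$sub/namespaces/1" "$port"
echo "SPDK-$nqn" > "$sub/attr_model"
echo 1           > "$sub/attr_allow_any_host"
echo "$dev"      > "$sub/namespaces/1/device_path"
echo 1           > "$sub/namespaces/1/enable"
echo 10.0.0.1    > "$port/addr_traddr"
echo tcp         > "$port/addr_trtype"
echo 4420        > "$port/addr_trsvcid"
echo ipv4        > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"

nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list nqn.2016-06.io.spdk:testnqn

# Teardown, mirroring clean_kernel_target:
echo 0 > "$sub/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$sub/namespaces/1" "$port" "$sub"
modprobe -r nvmet_tcp nvmet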
18:38:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:59.864 rmmod nvme_tcp 00:30:59.864 rmmod nvme_fabrics 00:30:59.864 rmmod nvme_keyring 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 91532 ']' 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 91532 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 91532 ']' 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 91532 00:30:59.864 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (91532) - No such process 00:30:59.864 Process with pid 91532 is not found 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 91532 is not found' 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:59.864 18:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:00.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:00.430 Waiting for block devices as requested 00:31:00.430 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:00.430 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:00.430 18:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:00.430 18:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:00.430 18:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:00.430 18:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:00.430 18:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.430 18:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:00.430 18:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.690 18:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:00.690 00:31:00.690 real 0m27.564s 00:31:00.690 user 0m53.413s 00:31:00.690 sys 0m7.088s 00:31:00.690 18:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:00.690 18:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:00.690 ************************************ 00:31:00.690 END TEST nvmf_abort_qd_sizes 00:31:00.690 ************************************ 00:31:00.690 18:38:12 -- common/autotest_common.sh@1142 -- # return 0 00:31:00.690 18:38:12 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:00.690 18:38:12 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:31:00.690 18:38:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.690 18:38:12 -- common/autotest_common.sh@10 -- # set +x 00:31:00.690 ************************************ 00:31:00.690 START TEST keyring_file 00:31:00.690 ************************************ 00:31:00.690 18:38:12 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:00.690 * Looking for test storage... 00:31:00.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:00.690 18:38:12 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:00.690 18:38:12 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.690 18:38:12 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.690 18:38:12 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.690 18:38:12 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.690 18:38:12 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.690 18:38:12 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.690 18:38:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:00.690 18:38:12 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:00.690 18:38:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:00.690 18:38:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:00.690 18:38:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:00.690 18:38:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:00.690 18:38:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:00.690 18:38:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BRbYf23VLI 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BRbYf23VLI 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BRbYf23VLI 00:31:00.690 18:38:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.BRbYf23VLI 00:31:00.690 18:38:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.X0DpmWcRLL 00:31:00.690 18:38:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:00.690 18:38:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:00.949 18:38:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.X0DpmWcRLL 00:31:00.949 18:38:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.X0DpmWcRLL 00:31:00.949 18:38:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.X0DpmWcRLL 00:31:00.949 18:38:12 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:00.949 18:38:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=92507 00:31:00.949 18:38:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 92507 00:31:00.949 18:38:12 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 92507 ']' 00:31:00.949 18:38:12 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.949 18:38:12 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:00.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.949 18:38:12 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.949 18:38:12 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:00.949 18:38:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:00.949 [2024-07-22 18:38:12.835289] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:00.949 [2024-07-22 18:38:12.835444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92507 ] 00:31:01.208 [2024-07-22 18:38:13.003897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.472 [2024-07-22 18:38:13.290535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.731 [2024-07-22 18:38:13.506928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:02.300 18:38:14 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:02.300 [2024-07-22 18:38:14.116655] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.300 null0 00:31:02.300 [2024-07-22 18:38:14.148628] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:02.300 [2024-07-22 18:38:14.148970] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:02.300 [2024-07-22 18:38:14.156625] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.300 18:38:14 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:02.300 [2024-07-22 18:38:14.168644] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:02.300 request: 00:31:02.300 { 00:31:02.300 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.300 "secure_channel": false, 00:31:02.300 "listen_address": { 00:31:02.300 "trtype": "tcp", 00:31:02.300 "traddr": "127.0.0.1", 00:31:02.300 "trsvcid": "4420" 00:31:02.300 }, 00:31:02.300 "method": "nvmf_subsystem_add_listener", 00:31:02.300 "req_id": 1 00:31:02.300 } 00:31:02.300 Got JSON-RPC error response 00:31:02.300 response: 00:31:02.300 { 00:31:02.300 "code": -32602, 00:31:02.300 "message": "Invalid parameters" 00:31:02.300 } 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
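prep_key above writes each test key to a 0600 mktemp file in the TLS PSK interchange form NVMeTLSkey-1:<hh>:<base64 of PSK plus little-endian CRC32>:. The python body is fed in on stdin, so xtrace does not show it; the sketch below is a hypothetical stand-in for that helper, not the script's exact code, and it assumes the ASCII bytes of the hex string are used directly as the PSK with digest 0 meaning "no hash".

# Hypothetical stand-in for prep_key/format_interchange_psk (real body not
# visible in this trace). Prints the path of a 0600 key file.
prep_key() {    # prep_key <key-string> <digest>
    local key=$1 digest=$2 path
    path=$(mktemp)
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
' "$key" "$digest" > "$path"
    chmod 0600 "$path"
    echo "$path"
}

key0path=$(prep_key 00112233445566778899aabbccddeeff 0)   # e.g. /tmp/tmp.BRbYf23VLI
key1path=$(prep_key 112233445566778899aabbccddeeff00 0)   # e.g. /tmp/tmp.X0DpmWcRLL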
00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:02.300 18:38:14 keyring_file -- keyring/file.sh@46 -- # bperfpid=92530 00:31:02.300 18:38:14 keyring_file -- keyring/file.sh@48 -- # waitforlisten 92530 /var/tmp/bperf.sock 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 92530 ']' 00:31:02.300 18:38:14 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:02.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:02.300 18:38:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:02.300 [2024-07-22 18:38:14.268043] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:02.300 [2024-07-22 18:38:14.268257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92530 ] 00:31:02.565 [2024-07-22 18:38:14.436137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.823 [2024-07-22 18:38:14.708870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.081 [2024-07-22 18:38:14.914148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:31:03.339 18:38:15 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:03.339 18:38:15 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:03.339 18:38:15 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BRbYf23VLI 00:31:03.339 18:38:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BRbYf23VLI 00:31:03.598 18:38:15 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.X0DpmWcRLL 00:31:03.598 18:38:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.X0DpmWcRLL 00:31:03.856 18:38:15 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:03.856 18:38:15 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:03.856 18:38:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:03.856 18:38:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.856 18:38:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.114 18:38:16 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.BRbYf23VLI == 
\/\t\m\p\/\t\m\p\.\B\R\b\Y\f\2\3\V\L\I ]] 00:31:04.114 18:38:16 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:04.114 18:38:16 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:04.114 18:38:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:04.114 18:38:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.114 18:38:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:04.372 18:38:16 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.X0DpmWcRLL == \/\t\m\p\/\t\m\p\.\X\0\D\p\m\W\c\R\L\L ]] 00:31:04.372 18:38:16 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:04.372 18:38:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:04.372 18:38:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:04.372 18:38:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:04.372 18:38:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:04.372 18:38:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.631 18:38:16 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:04.631 18:38:16 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:04.631 18:38:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:04.631 18:38:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:04.631 18:38:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:04.631 18:38:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.631 18:38:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:05.197 18:38:16 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:05.197 18:38:16 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:05.197 18:38:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:05.197 [2024-07-22 18:38:17.123177] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:05.197 nvme0n1 00:31:05.455 18:38:17 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:05.455 18:38:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:05.455 18:38:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:05.455 18:38:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:05.455 18:38:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:05.456 18:38:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:05.714 18:38:17 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:05.714 18:38:17 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:05.714 18:38:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:05.714 18:38:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:05.714 18:38:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:31:05.714 18:38:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:05.714 18:38:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:05.972 18:38:17 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:05.972 18:38:17 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:05.972 Running I/O for 1 seconds... 00:31:06.908 00:31:06.908 Latency(us) 00:31:06.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.908 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:06.908 nvme0n1 : 1.01 7986.71 31.20 0.00 0.00 15950.00 8579.26 55050.24 00:31:06.908 =================================================================================================================== 00:31:06.908 Total : 7986.71 31.20 0.00 0.00 15950.00 8579.26 55050.24 00:31:06.908 0 00:31:06.908 18:38:18 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:06.908 18:38:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:07.476 18:38:19 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:07.476 18:38:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:07.476 18:38:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:07.476 18:38:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:07.476 18:38:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:07.476 18:38:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.476 18:38:19 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:07.476 18:38:19 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:07.476 18:38:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:07.476 18:38:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:07.476 18:38:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:07.476 18:38:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.476 18:38:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:07.734 18:38:19 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:07.734 18:38:19 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:07.734 18:38:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:07.734 18:38:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:07.734 18:38:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:07.734 18:38:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:07.734 18:38:19 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:07.734 18:38:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
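The refcount assertions in this part of the run are built from keyring_get_keys plus jq filters in keyring/common.sh. A rough Python equivalent of the get_key/get_refcnt helpers is sketched below, reusing the same rpc.py path and bperf socket shown in the log; the field names (name, path, refcnt, removed) are taken from the output above.

import json
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"


def keyring_get_keys():
    """Return the key list as 'rpc.py -s /var/tmp/bperf.sock keyring_get_keys' prints it."""
    out = subprocess.run([RPC, "-s", SOCK, "keyring_get_keys"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)


def get_refcnt(name):
    # Same selection the shell helper expresses with jq:
    #   jq '.[] | select(.name == "key0")' | jq -r .refcnt
    for key in keyring_get_keys():
        if key["name"] == name:
            return key["refcnt"]
    raise KeyError(name)


if __name__ == "__main__":
    # After bdev_nvme_attach_controller --psk key0 the test expects refcnt 2 for key0;
    # once the controller is detached it drops back to 1.
    print("key0 refcnt:", get_refcnt("key0"))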
00:31:07.734 18:38:19 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:07.734 18:38:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:07.994 [2024-07-22 18:38:19.980000] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:07.994 [2024-07-22 18:38:19.980061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (107): Transport endpoint is not connected 00:31:07.994 [2024-07-22 18:38:19.981013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030000 (9): Bad file descriptor 00:31:07.994 [2024-07-22 18:38:19.982007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:07.994 [2024-07-22 18:38:19.982043] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:07.994 [2024-07-22 18:38:19.982060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:07.994 request: 00:31:07.994 { 00:31:07.994 "name": "nvme0", 00:31:07.994 "trtype": "tcp", 00:31:07.994 "traddr": "127.0.0.1", 00:31:07.994 "adrfam": "ipv4", 00:31:07.994 "trsvcid": "4420", 00:31:07.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:07.994 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:07.994 "prchk_reftag": false, 00:31:07.994 "prchk_guard": false, 00:31:07.994 "hdgst": false, 00:31:07.994 "ddgst": false, 00:31:07.994 "psk": "key1", 00:31:07.994 "method": "bdev_nvme_attach_controller", 00:31:07.994 "req_id": 1 00:31:07.994 } 00:31:07.994 Got JSON-RPC error response 00:31:07.994 response: 00:31:07.994 { 00:31:07.994 "code": -5, 00:31:07.994 "message": "Input/output error" 00:31:07.994 } 00:31:07.994 18:38:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:07.994 18:38:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:07.994 18:38:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:07.994 18:38:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:07.994 18:38:20 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:07.994 18:38:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:07.994 18:38:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:07.994 18:38:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:07.994 18:38:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.994 18:38:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:08.561 18:38:20 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:08.561 18:38:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:08.561 18:38:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:08.562 18:38:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:08.562 18:38:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:08.562 18:38:20 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:08.562 18:38:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:08.562 18:38:20 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:08.562 18:38:20 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:08.562 18:38:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:08.821 18:38:20 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:08.821 18:38:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:09.079 18:38:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:09.079 18:38:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:09.079 18:38:21 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:09.647 18:38:21 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:09.647 18:38:21 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.BRbYf23VLI 00:31:09.647 18:38:21 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.BRbYf23VLI 00:31:09.647 18:38:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:09.647 18:38:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.BRbYf23VLI 00:31:09.647 18:38:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:09.647 18:38:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.647 18:38:21 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:09.647 18:38:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.647 18:38:21 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BRbYf23VLI 00:31:09.647 18:38:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BRbYf23VLI 00:31:09.647 [2024-07-22 18:38:21.651486] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BRbYf23VLI': 0100660 00:31:09.647 [2024-07-22 18:38:21.651563] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:09.647 request: 00:31:09.647 { 00:31:09.647 "name": "key0", 00:31:09.647 "path": "/tmp/tmp.BRbYf23VLI", 00:31:09.647 "method": "keyring_file_add_key", 00:31:09.647 "req_id": 1 00:31:09.647 } 00:31:09.647 Got JSON-RPC error response 00:31:09.647 response: 00:31:09.647 { 00:31:09.647 "code": -1, 00:31:09.647 "message": "Operation not permitted" 00:31:09.647 } 00:31:09.907 18:38:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:09.907 18:38:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:09.907 18:38:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:09.907 18:38:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:09.907 18:38:21 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.BRbYf23VLI 00:31:09.907 18:38:21 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BRbYf23VLI 00:31:09.907 18:38:21 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BRbYf23VLI 00:31:10.165 18:38:21 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.BRbYf23VLI 00:31:10.165 18:38:21 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:10.165 18:38:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:10.165 18:38:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:10.165 18:38:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:10.165 18:38:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:10.165 18:38:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:10.425 18:38:22 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:10.425 18:38:22 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:10.425 18:38:22 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:10.425 18:38:22 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:10.425 18:38:22 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:10.425 18:38:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.425 18:38:22 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:10.425 18:38:22 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:10.425 18:38:22 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:10.425 18:38:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:10.425 [2024-07-22 18:38:22.399781] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.BRbYf23VLI': No such file or directory 00:31:10.425 [2024-07-22 18:38:22.399848] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:10.425 [2024-07-22 18:38:22.399883] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:10.425 [2024-07-22 18:38:22.399897] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:10.425 [2024-07-22 18:38:22.399920] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:10.425 request: 00:31:10.425 { 00:31:10.425 "name": "nvme0", 00:31:10.425 "trtype": "tcp", 00:31:10.425 "traddr": "127.0.0.1", 00:31:10.425 "adrfam": "ipv4", 00:31:10.425 "trsvcid": "4420", 00:31:10.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.425 "prchk_reftag": false, 00:31:10.425 "prchk_guard": false, 00:31:10.425 "hdgst": false, 00:31:10.425 "ddgst": false, 00:31:10.425 "psk": "key0", 00:31:10.425 "method": "bdev_nvme_attach_controller", 00:31:10.425 "req_id": 1 00:31:10.425 } 00:31:10.425 
Got JSON-RPC error response 00:31:10.425 response: 00:31:10.425 { 00:31:10.425 "code": -19, 00:31:10.425 "message": "No such device" 00:31:10.425 } 00:31:10.425 18:38:22 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:10.425 18:38:22 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:10.425 18:38:22 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:10.425 18:38:22 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:10.425 18:38:22 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:10.425 18:38:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:10.992 18:38:22 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:10.992 18:38:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:10.992 18:38:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:10.992 18:38:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:10.992 18:38:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:10.992 18:38:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:10.992 18:38:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FMfKOiCmGN 00:31:10.992 18:38:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:10.992 18:38:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:10.992 18:38:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:10.992 18:38:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:10.992 18:38:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:10.992 18:38:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:10.992 18:38:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:10.992 18:38:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FMfKOiCmGN 00:31:10.992 18:38:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FMfKOiCmGN 00:31:10.992 18:38:22 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.FMfKOiCmGN 00:31:10.992 18:38:22 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FMfKOiCmGN 00:31:10.992 18:38:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FMfKOiCmGN 00:31:11.250 18:38:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:11.250 18:38:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:11.508 nvme0n1 00:31:11.508 18:38:23 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:11.508 18:38:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:11.508 18:38:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:11.508 18:38:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:11.508 18:38:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:31:11.508 18:38:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:11.766 18:38:23 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:11.766 18:38:23 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:11.766 18:38:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:12.024 18:38:23 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:12.024 18:38:23 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:12.024 18:38:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:12.024 18:38:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:12.024 18:38:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:12.281 18:38:24 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:12.281 18:38:24 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:12.281 18:38:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:12.281 18:38:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:12.281 18:38:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:12.281 18:38:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:12.281 18:38:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:12.539 18:38:24 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:12.539 18:38:24 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:12.539 18:38:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:12.797 18:38:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:12.797 18:38:24 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:12.797 18:38:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:12.797 18:38:24 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:12.797 18:38:24 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FMfKOiCmGN 00:31:12.797 18:38:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FMfKOiCmGN 00:31:13.056 18:38:25 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.X0DpmWcRLL 00:31:13.056 18:38:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.X0DpmWcRLL 00:31:13.314 18:38:25 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:13.314 18:38:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:13.881 nvme0n1 00:31:13.881 18:38:25 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:13.881 18:38:25 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:14.139 18:38:25 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:14.139 "subsystems": [ 00:31:14.139 { 00:31:14.139 "subsystem": "keyring", 00:31:14.139 "config": [ 00:31:14.139 { 00:31:14.139 "method": "keyring_file_add_key", 00:31:14.139 "params": { 00:31:14.139 "name": "key0", 00:31:14.139 "path": "/tmp/tmp.FMfKOiCmGN" 00:31:14.139 } 00:31:14.139 }, 00:31:14.139 { 00:31:14.139 "method": "keyring_file_add_key", 00:31:14.139 "params": { 00:31:14.139 "name": "key1", 00:31:14.139 "path": "/tmp/tmp.X0DpmWcRLL" 00:31:14.139 } 00:31:14.139 } 00:31:14.139 ] 00:31:14.139 }, 00:31:14.139 { 00:31:14.139 "subsystem": "iobuf", 00:31:14.139 "config": [ 00:31:14.139 { 00:31:14.139 "method": "iobuf_set_options", 00:31:14.139 "params": { 00:31:14.139 "small_pool_count": 8192, 00:31:14.139 "large_pool_count": 1024, 00:31:14.139 "small_bufsize": 8192, 00:31:14.139 "large_bufsize": 135168 00:31:14.139 } 00:31:14.139 } 00:31:14.139 ] 00:31:14.139 }, 00:31:14.139 { 00:31:14.139 "subsystem": "sock", 00:31:14.139 "config": [ 00:31:14.139 { 00:31:14.139 "method": "sock_set_default_impl", 00:31:14.139 "params": { 00:31:14.139 "impl_name": "uring" 00:31:14.139 } 00:31:14.139 }, 00:31:14.139 { 00:31:14.139 "method": "sock_impl_set_options", 00:31:14.139 "params": { 00:31:14.139 "impl_name": "ssl", 00:31:14.139 "recv_buf_size": 4096, 00:31:14.139 "send_buf_size": 4096, 00:31:14.139 "enable_recv_pipe": true, 00:31:14.139 "enable_quickack": false, 00:31:14.139 "enable_placement_id": 0, 00:31:14.139 "enable_zerocopy_send_server": true, 00:31:14.139 "enable_zerocopy_send_client": false, 00:31:14.139 "zerocopy_threshold": 0, 00:31:14.139 "tls_version": 0, 00:31:14.139 "enable_ktls": false 00:31:14.139 } 00:31:14.139 }, 00:31:14.139 { 00:31:14.139 "method": "sock_impl_set_options", 00:31:14.139 "params": { 00:31:14.139 "impl_name": "posix", 00:31:14.139 "recv_buf_size": 2097152, 00:31:14.139 "send_buf_size": 2097152, 00:31:14.139 "enable_recv_pipe": true, 00:31:14.139 "enable_quickack": false, 00:31:14.139 "enable_placement_id": 0, 00:31:14.139 "enable_zerocopy_send_server": true, 00:31:14.139 "enable_zerocopy_send_client": false, 00:31:14.139 "zerocopy_threshold": 0, 00:31:14.139 "tls_version": 0, 00:31:14.139 "enable_ktls": false 00:31:14.139 } 00:31:14.139 }, 00:31:14.139 { 00:31:14.139 "method": "sock_impl_set_options", 00:31:14.139 "params": { 00:31:14.139 "impl_name": "uring", 00:31:14.139 "recv_buf_size": 2097152, 00:31:14.139 "send_buf_size": 2097152, 00:31:14.140 "enable_recv_pipe": true, 00:31:14.140 "enable_quickack": false, 00:31:14.140 "enable_placement_id": 0, 00:31:14.140 "enable_zerocopy_send_server": false, 00:31:14.140 "enable_zerocopy_send_client": false, 00:31:14.140 "zerocopy_threshold": 0, 00:31:14.140 "tls_version": 0, 00:31:14.140 "enable_ktls": false 00:31:14.140 } 00:31:14.140 } 00:31:14.140 ] 00:31:14.140 }, 00:31:14.140 { 00:31:14.140 "subsystem": "vmd", 00:31:14.140 "config": [] 00:31:14.140 }, 00:31:14.140 { 00:31:14.140 "subsystem": "accel", 00:31:14.140 "config": [ 00:31:14.140 { 00:31:14.140 "method": "accel_set_options", 00:31:14.140 "params": { 00:31:14.140 "small_cache_size": 128, 00:31:14.140 "large_cache_size": 16, 00:31:14.140 "task_count": 2048, 00:31:14.140 "sequence_count": 2048, 00:31:14.140 "buf_count": 2048 00:31:14.140 } 00:31:14.140 } 00:31:14.140 ] 00:31:14.140 }, 00:31:14.140 { 00:31:14.140 "subsystem": "bdev", 00:31:14.140 "config": [ 00:31:14.140 { 
00:31:14.140 "method": "bdev_set_options", 00:31:14.140 "params": { 00:31:14.140 "bdev_io_pool_size": 65535, 00:31:14.140 "bdev_io_cache_size": 256, 00:31:14.140 "bdev_auto_examine": true, 00:31:14.140 "iobuf_small_cache_size": 128, 00:31:14.140 "iobuf_large_cache_size": 16 00:31:14.140 } 00:31:14.140 }, 00:31:14.140 { 00:31:14.140 "method": "bdev_raid_set_options", 00:31:14.140 "params": { 00:31:14.140 "process_window_size_kb": 1024, 00:31:14.140 "process_max_bandwidth_mb_sec": 0 00:31:14.140 } 00:31:14.140 }, 00:31:14.140 { 00:31:14.140 "method": "bdev_iscsi_set_options", 00:31:14.140 "params": { 00:31:14.140 "timeout_sec": 30 00:31:14.140 } 00:31:14.140 }, 00:31:14.140 { 00:31:14.140 "method": "bdev_nvme_set_options", 00:31:14.140 "params": { 00:31:14.140 "action_on_timeout": "none", 00:31:14.140 "timeout_us": 0, 00:31:14.140 "timeout_admin_us": 0, 00:31:14.140 "keep_alive_timeout_ms": 10000, 00:31:14.140 "arbitration_burst": 0, 00:31:14.140 "low_priority_weight": 0, 00:31:14.140 "medium_priority_weight": 0, 00:31:14.140 "high_priority_weight": 0, 00:31:14.140 "nvme_adminq_poll_period_us": 10000, 00:31:14.140 "nvme_ioq_poll_period_us": 0, 00:31:14.140 "io_queue_requests": 512, 00:31:14.140 "delay_cmd_submit": true, 00:31:14.140 "transport_retry_count": 4, 00:31:14.140 "bdev_retry_count": 3, 00:31:14.140 "transport_ack_timeout": 0, 00:31:14.140 "ctrlr_loss_timeout_sec": 0, 00:31:14.140 "reconnect_delay_sec": 0, 00:31:14.140 "fast_io_fail_timeout_sec": 0, 00:31:14.140 "disable_auto_failback": false, 00:31:14.140 "generate_uuids": false, 00:31:14.140 "transport_tos": 0, 00:31:14.140 "nvme_error_stat": false, 00:31:14.140 "rdma_srq_size": 0, 00:31:14.140 "io_path_stat": false, 00:31:14.140 "allow_accel_sequence": false, 00:31:14.140 "rdma_max_cq_size": 0, 00:31:14.140 "rdma_cm_event_timeout_ms": 0, 00:31:14.140 "dhchap_digests": [ 00:31:14.140 "sha256", 00:31:14.140 "sha384", 00:31:14.140 "sha512" 00:31:14.140 ], 00:31:14.140 "dhchap_dhgroups": [ 00:31:14.140 "null", 00:31:14.140 "ffdhe2048", 00:31:14.140 "ffdhe3072", 00:31:14.140 "ffdhe4096", 00:31:14.140 "ffdhe6144", 00:31:14.140 "ffdhe8192" 00:31:14.140 ] 00:31:14.140 } 00:31:14.140 }, 00:31:14.140 { 00:31:14.140 "method": "bdev_nvme_attach_controller", 00:31:14.140 "params": { 00:31:14.140 "name": "nvme0", 00:31:14.140 "trtype": "TCP", 00:31:14.140 "adrfam": "IPv4", 00:31:14.140 "traddr": "127.0.0.1", 00:31:14.140 "trsvcid": "4420", 00:31:14.140 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.140 "prchk_reftag": false, 00:31:14.140 "prchk_guard": false, 00:31:14.140 "ctrlr_loss_timeout_sec": 0, 00:31:14.140 "reconnect_delay_sec": 0, 00:31:14.140 "fast_io_fail_timeout_sec": 0, 00:31:14.140 "psk": "key0", 00:31:14.140 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:14.140 "hdgst": false, 00:31:14.140 "ddgst": false 00:31:14.140 } 00:31:14.140 }, 00:31:14.140 { 00:31:14.140 "method": "bdev_nvme_set_hotplug", 00:31:14.140 "params": { 00:31:14.140 "period_us": 100000, 00:31:14.140 "enable": false 00:31:14.140 } 00:31:14.140 }, 00:31:14.140 { 00:31:14.140 "method": "bdev_wait_for_examine" 00:31:14.140 } 00:31:14.140 ] 00:31:14.140 }, 00:31:14.140 { 00:31:14.140 "subsystem": "nbd", 00:31:14.140 "config": [] 00:31:14.140 } 00:31:14.140 ] 00:31:14.140 }' 00:31:14.140 18:38:25 keyring_file -- keyring/file.sh@114 -- # killprocess 92530 00:31:14.140 18:38:25 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 92530 ']' 00:31:14.140 18:38:25 keyring_file -- common/autotest_common.sh@952 -- # kill -0 92530 00:31:14.140 18:38:25 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:31:14.140 18:38:25 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:14.140 18:38:25 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92530 00:31:14.140 killing process with pid 92530 00:31:14.140 Received shutdown signal, test time was about 1.000000 seconds 00:31:14.140 00:31:14.140 Latency(us) 00:31:14.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:14.140 =================================================================================================================== 00:31:14.140 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:14.140 18:38:25 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:14.140 18:38:25 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:14.140 18:38:25 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92530' 00:31:14.140 18:38:25 keyring_file -- common/autotest_common.sh@967 -- # kill 92530 00:31:14.140 18:38:25 keyring_file -- common/autotest_common.sh@972 -- # wait 92530 00:31:15.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:15.513 18:38:27 keyring_file -- keyring/file.sh@117 -- # bperfpid=92792 00:31:15.513 18:38:27 keyring_file -- keyring/file.sh@119 -- # waitforlisten 92792 /var/tmp/bperf.sock 00:31:15.513 18:38:27 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 92792 ']' 00:31:15.513 18:38:27 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:15.513 18:38:27 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:15.513 18:38:27 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
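The second bdevperf instance below is started with '-c /dev/fd/63', i.e. it replays the save_config dump captured above instead of re-issuing the RPCs one by one. A small sketch of walking that JSON to see which key files the keyring subsystem will reload; the input filename is a placeholder, since the test pipes the JSON straight in via process substitution.

import json


def keyring_paths(config):
    """Map key name -> backing file for every keyring_file_add_key entry in a save_config dump."""
    paths = {}
    for subsystem in config["subsystems"]:
        if subsystem["subsystem"] != "keyring":
            continue
        for entry in subsystem["config"]:
            if entry["method"] == "keyring_file_add_key":
                paths[entry["params"]["name"]] = entry["params"]["path"]
    return paths


if __name__ == "__main__":
    # Placeholder file name; in the test the same JSON never touches disk.
    with open("saved_config.json") as f:
        cfg = json.load(f)
    # For the dump above this prints key0 -> /tmp/tmp.FMfKOiCmGN, key1 -> /tmp/tmp.X0DpmWcRLL.
    print(keyring_paths(cfg))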
00:31:15.513 18:38:27 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:15.513 18:38:27 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:15.513 18:38:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:15.513 18:38:27 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:15.513 "subsystems": [ 00:31:15.513 { 00:31:15.513 "subsystem": "keyring", 00:31:15.513 "config": [ 00:31:15.513 { 00:31:15.513 "method": "keyring_file_add_key", 00:31:15.513 "params": { 00:31:15.513 "name": "key0", 00:31:15.513 "path": "/tmp/tmp.FMfKOiCmGN" 00:31:15.513 } 00:31:15.513 }, 00:31:15.513 { 00:31:15.513 "method": "keyring_file_add_key", 00:31:15.513 "params": { 00:31:15.513 "name": "key1", 00:31:15.513 "path": "/tmp/tmp.X0DpmWcRLL" 00:31:15.513 } 00:31:15.513 } 00:31:15.513 ] 00:31:15.513 }, 00:31:15.513 { 00:31:15.513 "subsystem": "iobuf", 00:31:15.513 "config": [ 00:31:15.513 { 00:31:15.513 "method": "iobuf_set_options", 00:31:15.513 "params": { 00:31:15.513 "small_pool_count": 8192, 00:31:15.513 "large_pool_count": 1024, 00:31:15.513 "small_bufsize": 8192, 00:31:15.513 "large_bufsize": 135168 00:31:15.513 } 00:31:15.513 } 00:31:15.513 ] 00:31:15.513 }, 00:31:15.513 { 00:31:15.513 "subsystem": "sock", 00:31:15.513 "config": [ 00:31:15.513 { 00:31:15.513 "method": "sock_set_default_impl", 00:31:15.513 "params": { 00:31:15.513 "impl_name": "uring" 00:31:15.513 } 00:31:15.513 }, 00:31:15.513 { 00:31:15.513 "method": "sock_impl_set_options", 00:31:15.513 "params": { 00:31:15.513 "impl_name": "ssl", 00:31:15.513 "recv_buf_size": 4096, 00:31:15.513 "send_buf_size": 4096, 00:31:15.513 "enable_recv_pipe": true, 00:31:15.513 "enable_quickack": false, 00:31:15.513 "enable_placement_id": 0, 00:31:15.513 "enable_zerocopy_send_server": true, 00:31:15.513 "enable_zerocopy_send_client": false, 00:31:15.513 "zerocopy_threshold": 0, 00:31:15.513 "tls_version": 0, 00:31:15.513 "enable_ktls": false 00:31:15.513 } 00:31:15.513 }, 00:31:15.513 { 00:31:15.513 "method": "sock_impl_set_options", 00:31:15.513 "params": { 00:31:15.513 "impl_name": "posix", 00:31:15.513 "recv_buf_size": 2097152, 00:31:15.513 "send_buf_size": 2097152, 00:31:15.513 "enable_recv_pipe": true, 00:31:15.513 "enable_quickack": false, 00:31:15.513 "enable_placement_id": 0, 00:31:15.513 "enable_zerocopy_send_server": true, 00:31:15.513 "enable_zerocopy_send_client": false, 00:31:15.513 "zerocopy_threshold": 0, 00:31:15.513 "tls_version": 0, 00:31:15.513 "enable_ktls": false 00:31:15.513 } 00:31:15.513 }, 00:31:15.513 { 00:31:15.513 "method": "sock_impl_set_options", 00:31:15.513 "params": { 00:31:15.513 "impl_name": "uring", 00:31:15.513 "recv_buf_size": 2097152, 00:31:15.513 "send_buf_size": 2097152, 00:31:15.513 "enable_recv_pipe": true, 00:31:15.513 "enable_quickack": false, 00:31:15.513 "enable_placement_id": 0, 00:31:15.513 "enable_zerocopy_send_server": false, 00:31:15.513 "enable_zerocopy_send_client": false, 00:31:15.513 "zerocopy_threshold": 0, 00:31:15.513 "tls_version": 0, 00:31:15.514 "enable_ktls": false 00:31:15.514 } 00:31:15.514 } 00:31:15.514 ] 00:31:15.514 }, 00:31:15.514 { 00:31:15.514 "subsystem": "vmd", 00:31:15.514 "config": [] 00:31:15.514 }, 00:31:15.514 { 00:31:15.514 "subsystem": "accel", 00:31:15.514 "config": [ 00:31:15.514 { 00:31:15.514 "method": "accel_set_options", 00:31:15.514 "params": { 00:31:15.514 "small_cache_size": 128, 00:31:15.514 "large_cache_size": 16, 
00:31:15.514 "task_count": 2048, 00:31:15.514 "sequence_count": 2048, 00:31:15.514 "buf_count": 2048 00:31:15.514 } 00:31:15.514 } 00:31:15.514 ] 00:31:15.514 }, 00:31:15.514 { 00:31:15.514 "subsystem": "bdev", 00:31:15.514 "config": [ 00:31:15.514 { 00:31:15.514 "method": "bdev_set_options", 00:31:15.514 "params": { 00:31:15.514 "bdev_io_pool_size": 65535, 00:31:15.514 "bdev_io_cache_size": 256, 00:31:15.514 "bdev_auto_examine": true, 00:31:15.514 "iobuf_small_cache_size": 128, 00:31:15.514 "iobuf_large_cache_size": 16 00:31:15.514 } 00:31:15.514 }, 00:31:15.514 { 00:31:15.514 "method": "bdev_raid_set_options", 00:31:15.514 "params": { 00:31:15.514 "process_window_size_kb": 1024, 00:31:15.514 "process_max_bandwidth_mb_sec": 0 00:31:15.514 } 00:31:15.514 }, 00:31:15.514 { 00:31:15.514 "method": "bdev_iscsi_set_options", 00:31:15.514 "params": { 00:31:15.514 "timeout_sec": 30 00:31:15.514 } 00:31:15.514 }, 00:31:15.514 { 00:31:15.514 "method": "bdev_nvme_set_options", 00:31:15.514 "params": { 00:31:15.514 "action_on_timeout": "none", 00:31:15.514 "timeout_us": 0, 00:31:15.514 "timeout_admin_us": 0, 00:31:15.514 "keep_alive_timeout_ms": 10000, 00:31:15.514 "arbitration_burst": 0, 00:31:15.514 "low_priority_weight": 0, 00:31:15.514 "medium_priority_weight": 0, 00:31:15.514 "high_priority_weight": 0, 00:31:15.514 "nvme_adminq_poll_period_us": 10000, 00:31:15.514 "nvme_ioq_poll_period_us": 0, 00:31:15.514 "io_queue_requests": 512, 00:31:15.514 "delay_cmd_submit": true, 00:31:15.514 "transport_retry_count": 4, 00:31:15.514 "bdev_retry_count": 3, 00:31:15.514 "transport_ack_timeout": 0, 00:31:15.514 "ctrlr_loss_timeout_sec": 0, 00:31:15.514 "reconnect_delay_sec": 0, 00:31:15.514 "fast_io_fail_timeout_sec": 0, 00:31:15.514 "disable_auto_failback": false, 00:31:15.514 "generate_uuids": false, 00:31:15.514 "transport_tos": 0, 00:31:15.514 "nvme_error_stat": false, 00:31:15.514 "rdma_srq_size": 0, 00:31:15.514 "io_path_stat": false, 00:31:15.514 "allow_accel_sequence": false, 00:31:15.514 "rdma_max_cq_size": 0, 00:31:15.514 "rdma_cm_event_timeout_ms": 0, 00:31:15.514 "dhchap_digests": [ 00:31:15.514 "sha256", 00:31:15.514 "sha384", 00:31:15.514 "sha512" 00:31:15.514 ], 00:31:15.514 "dhchap_dhgroups": [ 00:31:15.514 "null", 00:31:15.514 "ffdhe2048", 00:31:15.514 "ffdhe3072", 00:31:15.514 "ffdhe4096", 00:31:15.514 "ffdhe6144", 00:31:15.514 "ffdhe8192" 00:31:15.514 ] 00:31:15.514 } 00:31:15.514 }, 00:31:15.514 { 00:31:15.514 "method": "bdev_nvme_attach_controller", 00:31:15.514 "params": { 00:31:15.514 "name": "nvme0", 00:31:15.514 "trtype": "TCP", 00:31:15.514 "adrfam": "IPv4", 00:31:15.514 "traddr": "127.0.0.1", 00:31:15.514 "trsvcid": "4420", 00:31:15.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:15.514 "prchk_reftag": false, 00:31:15.514 "prchk_guard": false, 00:31:15.514 "ctrlr_loss_timeout_sec": 0, 00:31:15.514 "reconnect_delay_sec": 0, 00:31:15.514 "fast_io_fail_timeout_sec": 0, 00:31:15.514 "psk": "key0", 00:31:15.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:15.514 "hdgst": false, 00:31:15.514 "ddgst": false 00:31:15.514 } 00:31:15.514 }, 00:31:15.514 { 00:31:15.514 "method": "bdev_nvme_set_hotplug", 00:31:15.514 "params": { 00:31:15.514 "period_us": 100000, 00:31:15.514 "enable": false 00:31:15.514 } 00:31:15.514 }, 00:31:15.514 { 00:31:15.514 "method": "bdev_wait_for_examine" 00:31:15.514 } 00:31:15.514 ] 00:31:15.514 }, 00:31:15.514 { 00:31:15.514 "subsystem": "nbd", 00:31:15.514 "config": [] 00:31:15.514 } 00:31:15.514 ] 00:31:15.514 }' 00:31:15.514 [2024-07-22 18:38:27.286889] 
Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:15.514 [2024-07-22 18:38:27.287070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92792 ] 00:31:15.514 [2024-07-22 18:38:27.462045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.772 [2024-07-22 18:38:27.698537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.030 [2024-07-22 18:38:27.983007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:31:16.289 [2024-07-22 18:38:28.113046] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:16.289 18:38:28 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:16.289 18:38:28 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:16.289 18:38:28 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:16.289 18:38:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:16.289 18:38:28 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:16.547 18:38:28 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:16.547 18:38:28 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:16.547 18:38:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:16.547 18:38:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:16.547 18:38:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:16.547 18:38:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:16.547 18:38:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:16.805 18:38:28 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:16.805 18:38:28 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:16.805 18:38:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:16.805 18:38:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:16.805 18:38:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:16.805 18:38:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:16.805 18:38:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:17.064 18:38:29 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:17.064 18:38:29 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:17.064 18:38:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:17.064 18:38:29 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:17.323 18:38:29 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:17.323 18:38:29 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:17.323 18:38:29 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.FMfKOiCmGN /tmp/tmp.X0DpmWcRLL 00:31:17.323 18:38:29 keyring_file -- keyring/file.sh@20 -- # killprocess 92792 00:31:17.323 18:38:29 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 92792 ']' 00:31:17.323 18:38:29 keyring_file -- common/autotest_common.sh@952 
-- # kill -0 92792 00:31:17.323 18:38:29 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:17.323 18:38:29 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:17.323 18:38:29 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92792 00:31:17.581 killing process with pid 92792 00:31:17.581 Received shutdown signal, test time was about 1.000000 seconds 00:31:17.581 00:31:17.581 Latency(us) 00:31:17.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.581 =================================================================================================================== 00:31:17.581 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:17.581 18:38:29 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:17.581 18:38:29 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:17.581 18:38:29 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92792' 00:31:17.581 18:38:29 keyring_file -- common/autotest_common.sh@967 -- # kill 92792 00:31:17.581 18:38:29 keyring_file -- common/autotest_common.sh@972 -- # wait 92792 00:31:18.956 18:38:30 keyring_file -- keyring/file.sh@21 -- # killprocess 92507 00:31:18.956 18:38:30 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 92507 ']' 00:31:18.956 18:38:30 keyring_file -- common/autotest_common.sh@952 -- # kill -0 92507 00:31:18.956 18:38:30 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:18.956 18:38:30 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:18.956 18:38:30 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92507 00:31:18.956 killing process with pid 92507 00:31:18.956 18:38:30 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:18.956 18:38:30 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:18.956 18:38:30 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92507' 00:31:18.956 18:38:30 keyring_file -- common/autotest_common.sh@967 -- # kill 92507 00:31:18.956 [2024-07-22 18:38:30.565357] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:18.956 18:38:30 keyring_file -- common/autotest_common.sh@972 -- # wait 92507 00:31:20.873 ************************************ 00:31:20.873 END TEST keyring_file 00:31:20.873 ************************************ 00:31:20.873 00:31:20.873 real 0m20.347s 00:31:20.873 user 0m46.171s 00:31:20.873 sys 0m3.478s 00:31:20.873 18:38:32 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:20.873 18:38:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:21.170 18:38:32 -- common/autotest_common.sh@1142 -- # return 0 00:31:21.170 18:38:32 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:31:21.170 18:38:32 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:21.170 18:38:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:21.170 18:38:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:21.170 18:38:32 -- common/autotest_common.sh@10 -- # set +x 00:31:21.170 ************************************ 00:31:21.170 START TEST keyring_linux 00:31:21.170 ************************************ 00:31:21.170 18:38:32 keyring_linux -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:31:21.170 * Looking for test storage... 00:31:21.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:21.170 18:38:32 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:21.170 18:38:32 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:21.170 18:38:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:21.170 18:38:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.170 18:38:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.170 18:38:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.170 18:38:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.170 18:38:32 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.170 18:38:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.170 18:38:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.170 18:38:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.170 18:38:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.170 18:38:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.170 18:38:33 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e224894-a0fc-4112-b81b-a37606f50c96 00:31:21.170 18:38:33 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=1e224894-a0fc-4112-b81b-a37606f50c96 00:31:21.170 18:38:33 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.170 18:38:33 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.170 18:38:33 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:21.170 18:38:33 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.170 18:38:33 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:21.170 18:38:33 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.170 18:38:33 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.170 18:38:33 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.170 18:38:33 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.170 18:38:33 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.171 18:38:33 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.171 18:38:33 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:21.171 18:38:33 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:21.171 18:38:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:21.171 18:38:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:21.171 18:38:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:21.171 18:38:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:21.171 18:38:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:21.171 18:38:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:21.171 18:38:33 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:21.171 /tmp/:spdk-test:key0 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:21.171 18:38:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:21.171 18:38:33 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:21.171 /tmp/:spdk-test:key1 00:31:21.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.171 18:38:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:21.171 18:38:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=92931 00:31:21.171 18:38:33 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:21.171 18:38:33 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 92931 00:31:21.171 18:38:33 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 92931 ']' 00:31:21.171 18:38:33 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.171 18:38:33 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:21.171 18:38:33 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.171 18:38:33 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:21.171 18:38:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:21.431 [2024-07-22 18:38:33.278239] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:21.431 [2024-07-22 18:38:33.278655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92931 ] 00:31:21.690 [2024-07-22 18:38:33.457842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.949 [2024-07-22 18:38:33.739424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.949 [2024-07-22 18:38:33.949673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:31:22.885 18:38:34 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:22.885 18:38:34 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:22.885 18:38:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:22.885 18:38:34 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.885 18:38:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:22.885 [2024-07-22 18:38:34.565187] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.885 null0 00:31:22.885 [2024-07-22 18:38:34.597190] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:22.885 [2024-07-22 18:38:34.597505] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:22.885 18:38:34 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.885 18:38:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:22.885 947104935 00:31:22.885 18:38:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:22.885 557612738 00:31:22.885 18:38:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=92954 00:31:22.885 18:38:34 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:22.885 18:38:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 92954 /var/tmp/bperf.sock 00:31:22.885 18:38:34 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 92954 ']' 00:31:22.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:22.885 18:38:34 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:22.885 18:38:34 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:22.885 18:38:34 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:22.885 18:38:34 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:22.885 18:38:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:22.885 [2024-07-22 18:38:34.716502] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
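keyring/linux.sh@66-67 above load both interchange-format keys into the kernel session keyring, and later steps resolve them back by name before comparing payloads. A condensed round trip using only keyctl commands that appear in this trace might look like the sketch below; the serial numbers are specific to this run.

# Condensed session-keyring round trip exercised by keyring/linux.sh.
payload='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'   # key0 payload from this run
keyctl add user :spdk-test:key0 "$payload" @s    # prints the new serial (947104935 in this run)
sn=$(keyctl search @s user :spdk-test:key0)      # resolve the serial back by name
keyctl print "$sn"                               # payload should match the expected PSK string
keyctl unlink "$sn"                              # cleanup step performed at the end of the test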
00:31:22.885 [2024-07-22 18:38:34.716886] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92954 ] 00:31:22.885 [2024-07-22 18:38:34.885438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.143 [2024-07-22 18:38:35.147842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.710 18:38:35 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:23.710 18:38:35 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:23.710 18:38:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:23.710 18:38:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:23.978 18:38:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:23.978 18:38:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:24.569 [2024-07-22 18:38:36.395873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:31:24.569 18:38:36 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:24.569 18:38:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:24.828 [2024-07-22 18:38:36.779622] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:25.086 nvme0n1 00:31:25.086 18:38:36 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:25.086 18:38:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:25.086 18:38:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:25.086 18:38:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:25.086 18:38:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:25.086 18:38:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:25.345 18:38:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:25.345 18:38:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:25.345 18:38:37 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:25.345 18:38:37 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:25.345 18:38:37 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:25.345 18:38:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:25.345 18:38:37 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:25.604 18:38:37 keyring_linux -- keyring/linux.sh@25 -- # sn=947104935 00:31:25.604 18:38:37 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:25.604 18:38:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:25.604 
18:38:37 keyring_linux -- keyring/linux.sh@26 -- # [[ 947104935 == \9\4\7\1\0\4\9\3\5 ]] 00:31:25.604 18:38:37 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 947104935 00:31:25.604 18:38:37 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:25.604 18:38:37 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:25.604 Running I/O for 1 seconds... 00:31:26.980 00:31:26.980 Latency(us) 00:31:26.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.980 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:26.980 nvme0n1 : 1.01 8561.96 33.45 0.00 0.00 14832.33 5540.77 20971.52 00:31:26.980 =================================================================================================================== 00:31:26.980 Total : 8561.96 33.45 0.00 0.00 14832.33 5540.77 20971.52 00:31:26.980 0 00:31:26.980 18:38:38 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:26.980 18:38:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:26.980 18:38:38 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:26.980 18:38:38 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:26.980 18:38:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:26.980 18:38:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:26.980 18:38:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:26.980 18:38:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:27.239 18:38:39 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:27.239 18:38:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:27.239 18:38:39 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:27.239 18:38:39 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:27.239 18:38:39 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:31:27.239 18:38:39 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:27.239 18:38:39 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:27.239 18:38:39 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.239 18:38:39 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:27.239 18:38:39 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.239 18:38:39 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:27.239 18:38:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:27.512 [2024-07-22 18:38:39.496250] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:27.512 [2024-07-22 18:38:39.497130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (107): Transport endpoint is not connected 00:31:27.512 [2024-07-22 18:38:39.498096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002f880 (9): Bad file descriptor 00:31:27.512 [2024-07-22 18:38:39.499090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:27.512 [2024-07-22 18:38:39.499334] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:27.512 [2024-07-22 18:38:39.499574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:27.512 request: 00:31:27.512 { 00:31:27.512 "name": "nvme0", 00:31:27.512 "trtype": "tcp", 00:31:27.512 "traddr": "127.0.0.1", 00:31:27.512 "adrfam": "ipv4", 00:31:27.512 "trsvcid": "4420", 00:31:27.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.512 "prchk_reftag": false, 00:31:27.512 "prchk_guard": false, 00:31:27.512 "hdgst": false, 00:31:27.512 "ddgst": false, 00:31:27.512 "psk": ":spdk-test:key1", 00:31:27.512 "method": "bdev_nvme_attach_controller", 00:31:27.512 "req_id": 1 00:31:27.512 } 00:31:27.512 Got JSON-RPC error response 00:31:27.512 response: 00:31:27.512 { 00:31:27.512 "code": -5, 00:31:27.512 "message": "Input/output error" 00:31:27.512 } 00:31:27.512 18:38:39 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:31:27.780 18:38:39 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@33 -- # sn=947104935 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 947104935 00:31:27.781 1 links removed 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@33 -- # sn=557612738 00:31:27.781 18:38:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 557612738 00:31:27.781 1 links removed 00:31:27.781 18:38:39 
keyring_linux -- keyring/linux.sh@41 -- # killprocess 92954 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 92954 ']' 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 92954 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92954 00:31:27.781 killing process with pid 92954 00:31:27.781 Received shutdown signal, test time was about 1.000000 seconds 00:31:27.781 00:31:27.781 Latency(us) 00:31:27.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.781 =================================================================================================================== 00:31:27.781 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92954' 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@967 -- # kill 92954 00:31:27.781 18:38:39 keyring_linux -- common/autotest_common.sh@972 -- # wait 92954 00:31:28.720 18:38:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 92931 00:31:28.720 18:38:40 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 92931 ']' 00:31:28.720 18:38:40 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 92931 00:31:28.720 18:38:40 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:28.720 18:38:40 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:28.720 18:38:40 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92931 00:31:28.720 killing process with pid 92931 00:31:28.720 18:38:40 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:28.720 18:38:40 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:28.720 18:38:40 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92931' 00:31:28.720 18:38:40 keyring_linux -- common/autotest_common.sh@967 -- # kill 92931 00:31:28.720 18:38:40 keyring_linux -- common/autotest_common.sh@972 -- # wait 92931 00:31:31.255 ************************************ 00:31:31.255 END TEST keyring_linux 00:31:31.255 ************************************ 00:31:31.255 00:31:31.255 real 0m10.053s 00:31:31.255 user 0m17.411s 00:31:31.255 sys 0m1.869s 00:31:31.255 18:38:42 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:31.255 18:38:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:31.255 18:38:43 -- common/autotest_common.sh@1142 -- # return 0 00:31:31.255 18:38:43 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:31.255 18:38:43 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:31.255 18:38:43 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:31.255 18:38:43 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:31.255 18:38:43 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:31:31.255 18:38:43 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:31.255 18:38:43 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:31.255 18:38:43 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:31.255 18:38:43 -- spdk/autotest.sh@347 -- # '[' 0 
-eq 1 ']' 00:31:31.255 18:38:43 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:31.255 18:38:43 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:31.255 18:38:43 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:31.255 18:38:43 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:31.255 18:38:43 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:31.255 18:38:43 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:31.255 18:38:43 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:31.255 18:38:43 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:31.255 18:38:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:31.255 18:38:43 -- common/autotest_common.sh@10 -- # set +x 00:31:31.255 18:38:43 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:31.255 18:38:43 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:31.255 18:38:43 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:31.255 18:38:43 -- common/autotest_common.sh@10 -- # set +x 00:31:32.633 INFO: APP EXITING 00:31:32.633 INFO: killing all VMs 00:31:32.633 INFO: killing vhost app 00:31:32.633 INFO: EXIT DONE 00:31:33.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:33.458 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:33.458 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:34.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:34.025 Cleaning 00:31:34.025 Removing: /var/run/dpdk/spdk0/config 00:31:34.025 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:34.025 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:34.025 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:34.025 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:34.025 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:34.025 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:34.025 Removing: /var/run/dpdk/spdk1/config 00:31:34.025 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:34.025 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:34.025 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:34.025 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:34.025 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:34.025 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:34.025 Removing: /var/run/dpdk/spdk2/config 00:31:34.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:34.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:34.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:34.025 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:34.025 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:34.025 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:34.025 Removing: /var/run/dpdk/spdk3/config 00:31:34.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:34.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:34.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:34.025 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:34.025 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:34.025 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:34.283 Removing: /var/run/dpdk/spdk4/config 00:31:34.283 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:34.283 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:34.283 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:34.283 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:34.283 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:34.283 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:34.283 Removing: /dev/shm/nvmf_trace.0 00:31:34.283 Removing: /dev/shm/spdk_tgt_trace.pid59588 00:31:34.283 Removing: /var/run/dpdk/spdk0 00:31:34.283 Removing: /var/run/dpdk/spdk1 00:31:34.283 Removing: /var/run/dpdk/spdk2 00:31:34.283 Removing: /var/run/dpdk/spdk3 00:31:34.283 Removing: /var/run/dpdk/spdk4 00:31:34.283 Removing: /var/run/dpdk/spdk_pid59361 00:31:34.283 Removing: /var/run/dpdk/spdk_pid59588 00:31:34.283 Removing: /var/run/dpdk/spdk_pid59809 00:31:34.283 Removing: /var/run/dpdk/spdk_pid59914 00:31:34.283 Removing: /var/run/dpdk/spdk_pid59969 00:31:34.283 Removing: /var/run/dpdk/spdk_pid60103 00:31:34.283 Removing: /var/run/dpdk/spdk_pid60126 00:31:34.283 Removing: /var/run/dpdk/spdk_pid60275 00:31:34.283 Removing: /var/run/dpdk/spdk_pid60490 00:31:34.283 Removing: /var/run/dpdk/spdk_pid60653 00:31:34.283 Removing: /var/run/dpdk/spdk_pid60757 00:31:34.283 Removing: /var/run/dpdk/spdk_pid60856 00:31:34.283 Removing: /var/run/dpdk/spdk_pid60970 00:31:34.283 Removing: /var/run/dpdk/spdk_pid61070 00:31:34.283 Removing: /var/run/dpdk/spdk_pid61115 00:31:34.283 Removing: /var/run/dpdk/spdk_pid61157 00:31:34.283 Removing: /var/run/dpdk/spdk_pid61225 00:31:34.283 Removing: /var/run/dpdk/spdk_pid61331 00:31:34.283 Removing: /var/run/dpdk/spdk_pid61789 00:31:34.283 Removing: /var/run/dpdk/spdk_pid61864 00:31:34.283 Removing: /var/run/dpdk/spdk_pid61936 00:31:34.283 Removing: /var/run/dpdk/spdk_pid61957 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62101 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62122 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62266 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62292 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62356 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62380 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62444 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62462 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62649 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62691 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62772 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62847 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62884 00:31:34.283 Removing: /var/run/dpdk/spdk_pid62962 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63003 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63055 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63102 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63148 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63195 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63241 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63288 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63340 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63381 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63433 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63474 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63525 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63567 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63619 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63666 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63707 00:31:34.283 Removing: /var/run/dpdk/spdk_pid63762 00:31:34.284 Removing: /var/run/dpdk/spdk_pid63810 00:31:34.284 Removing: /var/run/dpdk/spdk_pid63858 00:31:34.284 Removing: /var/run/dpdk/spdk_pid63910 00:31:34.284 Removing: /var/run/dpdk/spdk_pid63993 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64109 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64437 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64455 
00:31:34.284 Removing: /var/run/dpdk/spdk_pid64504 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64535 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64568 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64605 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64631 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64669 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64705 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64736 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64769 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64806 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64838 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64871 00:31:34.284 Removing: /var/run/dpdk/spdk_pid64909 00:31:34.542 Removing: /var/run/dpdk/spdk_pid64940 00:31:34.542 Removing: /var/run/dpdk/spdk_pid64972 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65008 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65038 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65071 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65115 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65147 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65194 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65276 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65322 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65349 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65395 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65422 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65447 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65507 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65538 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65584 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65610 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65633 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65660 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65687 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65714 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65741 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65769 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65815 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65858 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65881 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65927 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65954 00:31:34.542 Removing: /var/run/dpdk/spdk_pid65979 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66037 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66066 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66109 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66130 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66155 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66180 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66205 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66230 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66255 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66280 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66367 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66471 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66638 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66691 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66751 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66783 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66820 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66851 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66901 00:31:34.542 Removing: /var/run/dpdk/spdk_pid66934 00:31:34.542 Removing: /var/run/dpdk/spdk_pid67016 00:31:34.543 Removing: /var/run/dpdk/spdk_pid67066 00:31:34.543 Removing: /var/run/dpdk/spdk_pid67154 00:31:34.543 Removing: /var/run/dpdk/spdk_pid67289 00:31:34.543 Removing: /var/run/dpdk/spdk_pid67387 00:31:34.543 Removing: /var/run/dpdk/spdk_pid67451 00:31:34.543 Removing: 
/var/run/dpdk/spdk_pid67571 00:31:34.543 Removing: /var/run/dpdk/spdk_pid67636 00:31:34.543 Removing: /var/run/dpdk/spdk_pid67688 00:31:34.543 Removing: /var/run/dpdk/spdk_pid67932 00:31:34.543 Removing: /var/run/dpdk/spdk_pid68050 00:31:34.543 Removing: /var/run/dpdk/spdk_pid68095 00:31:34.543 Removing: /var/run/dpdk/spdk_pid68439 00:31:34.543 Removing: /var/run/dpdk/spdk_pid68483 00:31:34.543 Removing: /var/run/dpdk/spdk_pid68799 00:31:34.543 Removing: /var/run/dpdk/spdk_pid69223 00:31:34.543 Removing: /var/run/dpdk/spdk_pid69502 00:31:34.543 Removing: /var/run/dpdk/spdk_pid70345 00:31:34.543 Removing: /var/run/dpdk/spdk_pid71200 00:31:34.543 Removing: /var/run/dpdk/spdk_pid71334 00:31:34.543 Removing: /var/run/dpdk/spdk_pid71415 00:31:34.543 Removing: /var/run/dpdk/spdk_pid72718 00:31:34.543 Removing: /var/run/dpdk/spdk_pid73022 00:31:34.543 Removing: /var/run/dpdk/spdk_pid76429 00:31:34.543 Removing: /var/run/dpdk/spdk_pid76776 00:31:34.543 Removing: /var/run/dpdk/spdk_pid76887 00:31:34.543 Removing: /var/run/dpdk/spdk_pid77028 00:31:34.543 Removing: /var/run/dpdk/spdk_pid77062 00:31:34.543 Removing: /var/run/dpdk/spdk_pid77097 00:31:34.543 Removing: /var/run/dpdk/spdk_pid77140 00:31:34.543 Removing: /var/run/dpdk/spdk_pid77253 00:31:34.543 Removing: /var/run/dpdk/spdk_pid77396 00:31:34.543 Removing: /var/run/dpdk/spdk_pid77583 00:31:34.543 Removing: /var/run/dpdk/spdk_pid77688 00:31:34.543 Removing: /var/run/dpdk/spdk_pid77899 00:31:34.543 Removing: /var/run/dpdk/spdk_pid78002 00:31:34.543 Removing: /var/run/dpdk/spdk_pid78118 00:31:34.543 Removing: /var/run/dpdk/spdk_pid78444 00:31:34.543 Removing: /var/run/dpdk/spdk_pid78814 00:31:34.800 Removing: /var/run/dpdk/spdk_pid78827 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81090 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81099 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81393 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81409 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81424 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81456 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81468 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81558 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81566 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81670 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81679 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81783 00:31:34.800 Removing: /var/run/dpdk/spdk_pid81786 00:31:34.800 Removing: /var/run/dpdk/spdk_pid82190 00:31:34.800 Removing: /var/run/dpdk/spdk_pid82232 00:31:34.800 Removing: /var/run/dpdk/spdk_pid82334 00:31:34.800 Removing: /var/run/dpdk/spdk_pid82411 00:31:34.800 Removing: /var/run/dpdk/spdk_pid82724 00:31:34.801 Removing: /var/run/dpdk/spdk_pid82931 00:31:34.801 Removing: /var/run/dpdk/spdk_pid83328 00:31:34.801 Removing: /var/run/dpdk/spdk_pid83849 00:31:34.801 Removing: /var/run/dpdk/spdk_pid84668 00:31:34.801 Removing: /var/run/dpdk/spdk_pid85274 00:31:34.801 Removing: /var/run/dpdk/spdk_pid85284 00:31:34.801 Removing: /var/run/dpdk/spdk_pid87202 00:31:34.801 Removing: /var/run/dpdk/spdk_pid87276 00:31:34.801 Removing: /var/run/dpdk/spdk_pid87343 00:31:34.801 Removing: /var/run/dpdk/spdk_pid87414 00:31:34.801 Removing: /var/run/dpdk/spdk_pid87564 00:31:34.801 Removing: /var/run/dpdk/spdk_pid87632 00:31:34.801 Removing: /var/run/dpdk/spdk_pid87699 00:31:34.801 Removing: /var/run/dpdk/spdk_pid87765 00:31:34.801 Removing: /var/run/dpdk/spdk_pid88106 00:31:34.801 Removing: /var/run/dpdk/spdk_pid89270 00:31:34.801 Removing: /var/run/dpdk/spdk_pid89418 00:31:34.801 Removing: /var/run/dpdk/spdk_pid89663 
00:31:34.801 Removing: /var/run/dpdk/spdk_pid90223 00:31:34.801 Removing: /var/run/dpdk/spdk_pid90382 00:31:34.801 Removing: /var/run/dpdk/spdk_pid90544 00:31:34.801 Removing: /var/run/dpdk/spdk_pid90640 00:31:34.801 Removing: /var/run/dpdk/spdk_pid90803 00:31:34.801 Removing: /var/run/dpdk/spdk_pid90914 00:31:34.801 Removing: /var/run/dpdk/spdk_pid91588 00:31:34.801 Removing: /var/run/dpdk/spdk_pid91624 00:31:34.801 Removing: /var/run/dpdk/spdk_pid91655 00:31:34.801 Removing: /var/run/dpdk/spdk_pid92017 00:31:34.801 Removing: /var/run/dpdk/spdk_pid92055 00:31:34.801 Removing: /var/run/dpdk/spdk_pid92087 00:31:34.801 Removing: /var/run/dpdk/spdk_pid92507 00:31:34.801 Removing: /var/run/dpdk/spdk_pid92530 00:31:34.801 Removing: /var/run/dpdk/spdk_pid92792 00:31:34.801 Removing: /var/run/dpdk/spdk_pid92931 00:31:34.801 Removing: /var/run/dpdk/spdk_pid92954 00:31:34.801 Clean 00:31:34.801 18:38:46 -- common/autotest_common.sh@1451 -- # return 0 00:31:34.801 18:38:46 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:31:34.801 18:38:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:34.801 18:38:46 -- common/autotest_common.sh@10 -- # set +x 00:31:35.059 18:38:46 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:31:35.059 18:38:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:35.059 18:38:46 -- common/autotest_common.sh@10 -- # set +x 00:31:35.059 18:38:46 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:35.059 18:38:46 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:35.059 18:38:46 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:35.059 18:38:46 -- spdk/autotest.sh@391 -- # hash lcov 00:31:35.059 18:38:46 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:35.059 18:38:46 -- spdk/autotest.sh@393 -- # hostname 00:31:35.059 18:38:46 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:35.317 geninfo: WARNING: invalid characters removed from testname! 
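The coverage pass that begins here, and continues with the lcov calls on the following lines, captures test-time counters, merges them with the baseline, and strips paths that are not SPDK's own code. A condensed sketch of that flow with the same flags and filter patterns used in this run:

# Condensed coverage post-processing as driven by autotest.sh in this run.
LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
out=/home/vagrant/spdk_repo/spdk/../output
# Capture the test-time counters (cov_base.info was captured earlier in the run).
lcov $LCOV_OPTS -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"
# Merge baseline and test data, then drop paths outside the SPDK tree.
lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
lcov $LCOV_OPTS -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
lcov $LCOV_OPTS -r "$out/cov_total.info" '/usr/*' -o "$out/cov_total.info"

The remaining -r passes on the next lines strip example and app paths (*/examples/vmd/*, */app/spdk_lspci/*, */app/spdk_top/*) in the same way before cov_total.info is kept as the final tracefile.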
00:32:07.391 18:39:14 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:07.391 18:39:18 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:09.288 18:39:20 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:11.811 18:39:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:15.093 18:39:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:17.624 18:39:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:20.172 18:39:32 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:20.434 18:39:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:20.434 18:39:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:20.434 18:39:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.434 18:39:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.434 18:39:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.434 18:39:32 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.434 18:39:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.434 18:39:32 -- paths/export.sh@5 -- $ export PATH 00:32:20.434 18:39:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.434 18:39:32 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:32:20.434 18:39:32 -- common/autobuild_common.sh@447 -- $ date +%s 00:32:20.434 18:39:32 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721673572.XXXXXX 00:32:20.434 18:39:32 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721673572.1miMkL 00:32:20.434 18:39:32 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:32:20.434 18:39:32 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:32:20.434 18:39:32 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:32:20.434 18:39:32 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:32:20.434 18:39:32 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:32:20.434 18:39:32 -- common/autobuild_common.sh@463 -- $ get_config_params 00:32:20.434 18:39:32 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:32:20.434 18:39:32 -- common/autotest_common.sh@10 -- $ set +x 00:32:20.434 18:39:32 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:32:20.434 18:39:32 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:32:20.434 18:39:32 -- pm/common@17 -- $ local monitor 00:32:20.434 18:39:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:20.434 18:39:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:20.434 18:39:32 -- pm/common@25 -- $ sleep 1 00:32:20.434 18:39:32 -- pm/common@21 -- $ date +%s 00:32:20.434 18:39:32 -- pm/common@21 -- $ date +%s 00:32:20.434 18:39:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721673572 00:32:20.434 18:39:32 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721673572 00:32:20.434 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721673572_collect-vmstat.pm.log 00:32:20.434 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721673572_collect-cpu-load.pm.log 00:32:21.373 18:39:33 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:32:21.373 18:39:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:32:21.373 18:39:33 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:32:21.373 18:39:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:21.373 18:39:33 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:32:21.373 18:39:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:21.373 18:39:33 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:21.373 18:39:33 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:21.373 18:39:33 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:21.373 18:39:33 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:21.373 18:39:33 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:21.373 18:39:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:21.373 18:39:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:21.373 18:39:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:21.373 18:39:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:21.373 18:39:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:32:21.373 18:39:33 -- pm/common@44 -- $ pid=94727 00:32:21.373 18:39:33 -- pm/common@50 -- $ kill -TERM 94727 00:32:21.373 18:39:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:21.373 18:39:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:32:21.373 18:39:33 -- pm/common@44 -- $ pid=94729 00:32:21.373 18:39:33 -- pm/common@50 -- $ kill -TERM 94729 00:32:21.373 + [[ -n 5167 ]] 00:32:21.373 + sudo kill 5167 00:32:21.382 [Pipeline] } 00:32:21.398 [Pipeline] // timeout 00:32:21.403 [Pipeline] } 00:32:21.430 [Pipeline] // stage 00:32:21.446 [Pipeline] } 00:32:21.475 [Pipeline] // catchError 00:32:21.480 [Pipeline] stage 00:32:21.482 [Pipeline] { (Stop VM) 00:32:21.490 [Pipeline] sh 00:32:21.764 + vagrant halt 00:32:26.007 ==> default: Halting domain... 00:32:31.285 [Pipeline] sh 00:32:31.577 + vagrant destroy -f 00:32:34.873 ==> default: Removing domain... 
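The stop_monitor_resources EXIT trap above ends the two resource monitors started for autopackage by signalling the pid recorded in each monitor's pidfile under ../output/power. A sketch of that pattern follows; reading the pid out of the pidfile is an assumption, since the trace only shows the resulting pid values (94727 and 94729).

# Sketch of the pidfile-based monitor shutdown run by the EXIT trap above.
power_dir=/home/vagrant/spdk_repo/spdk/../output/power
for mon in collect-cpu-load collect-vmstat; do
    pidfile="$power_dir/$mon.pid"
    if [[ -e "$pidfile" ]]; then
        pid=$(<"$pidfile")      # assumed source of the pid; the trace only shows pid=94727 / pid=94729
        kill -TERM "$pid"       # matches the kill -TERM calls in the trace
    fi
done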
00:32:34.886 [Pipeline] sh 00:32:35.166 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:32:35.176 [Pipeline] } 00:32:35.194 [Pipeline] // stage 00:32:35.199 [Pipeline] } 00:32:35.216 [Pipeline] // dir 00:32:35.222 [Pipeline] } 00:32:35.240 [Pipeline] // wrap 00:32:35.247 [Pipeline] } 00:32:35.263 [Pipeline] // catchError 00:32:35.273 [Pipeline] stage 00:32:35.276 [Pipeline] { (Epilogue) 00:32:35.291 [Pipeline] sh 00:32:35.578 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:42.171 [Pipeline] catchError 00:32:42.173 [Pipeline] { 00:32:42.188 [Pipeline] sh 00:32:42.465 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:42.723 Artifacts sizes are good 00:32:42.731 [Pipeline] } 00:32:42.744 [Pipeline] // catchError 00:32:42.751 [Pipeline] archiveArtifacts 00:32:42.756 Archiving artifacts 00:32:42.928 [Pipeline] cleanWs 00:32:42.939 [WS-CLEANUP] Deleting project workspace... 00:32:42.939 [WS-CLEANUP] Deferred wipeout is used... 00:32:42.945 [WS-CLEANUP] done 00:32:42.947 [Pipeline] } 00:32:42.962 [Pipeline] // stage 00:32:42.968 [Pipeline] } 00:32:42.983 [Pipeline] // node 00:32:42.988 [Pipeline] End of Pipeline 00:32:43.011 Finished: SUCCESS